Web Fuzzing

So I am currently working on creating a Web Application/Service Fuzzer. This seemed like an easy enough task, except that it's never really been done before, at least not to the extent I'm aiming for.

There have been some tools out there for fuzzing. There are fully functional tools for web apps, like SPIKE from Immunity, but it isn't automated. There are some sweet intelligent fuzzing frameworks, like Peach and Sulley, but these have the same limitation, in addition to not being aimed at the web. I found another sort of framework called RFuzz. It is better suited for the web, yet even less automated than the other tools.

My specifications make the project a bit more formidable. I need to create an intelligent fuzzer that can be hosted centrally on an intranet and accessed via simple web request forms. I will explain my design and its difficulties below:

My solution contains three modules: a Fuzzing Engine, a Web UI, and an Enumerator/Crawler.

Fuzzing Engine: The engine I decided on was RFuzz. Because it naturally targets web applications, I felt it would be the best fit for the job. Making this modular involves writing a wrapper that parses enumerated attack points and loops them through the fuzzing process. An output wrapper then formats the results into XML for easy integration into the Web UI, where users can review the results. I am basing most of this portion on work done by Rune Hammerland in his master's thesis. This portion of the project shouldn't be too bad, except that I have to learn Ruby to do it. Good thing Ruby is pretty straightforward 😉
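Since I'm still learning Ruby, here's only a rough sketch of how that wrapper loop and the XML output stage might look. Everything in it is hypothetical: the attack-point record layout, the payload list, and the stubbed request call (which would become a real RFuzz HTTP request in the actual wrapper). The XML comes from REXML, which ships with Ruby.

```ruby
require 'rexml/document'

# Hypothetical attack points, in the shape the enumerator might hand over:
# each one names a URL and the parameter to mutate.
ATTACK_POINTS = [
  { url: '/login',  param: 'username' },
  { url: '/search', param: 'q' }
]

# A tiny sample payload list; a real run would pull from the fuzzer's generators.
PAYLOADS = ["'", '<script>alert(1)</script>', 'A' * 1024]

# Loop every payload through every attack point. The block stands in for
# the actual RFuzz HTTP call and returns a status code.
def fuzz(points, payloads)
  points.flat_map do |pt|
    payloads.map do |payload|
      status = yield(pt[:url], pt[:param], payload)
      { url: pt[:url], param: pt[:param], payload: payload, status: status }
    end
  end
end

# Format the results as XML so the Web UI can render them.
def to_xml(results)
  doc  = REXML::Document.new
  root = doc.add_element('fuzz-results')
  results.each do |r|
    el = root.add_element('result', 'url' => r[:url], 'param' => r[:param])
    el.add_element('payload').text = r[:payload]
    el.add_element('status').text  = r[:status].to_s
  end
  out = String.new
  doc.write(out)
  out
end
```

The real work, of course, is in replacing that stub with RFuzz calls and deciding which responses count as interesting.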

Enumerator/Crawler: There are a lot of web crawling libraries/tools out there, hpricot being a big one. The one weakness of hpricot is that it can't crawl AJAX. You see, in AJAX you can essentially bind different dynamic states to nearly any HTML element you want. In normal static pages you can simply focus on the anchor (<a>..</a>) tags and button elements. In AJAX, however, the possibilities go far beyond that. Fortunately, I found a sweet project called Crawljax, written in Java, which makes me happy. Crawljax can completely exercise any element on any webpage you choose, making it ideal for crawling AJAX pages. The hard part is outputting these results in a workable format. I have been having trouble figuring out Crawljax's output. It seemed straightforward at first:

config.setOutputFolder("/tmp/");

but I think perhaps it's bugged.

Instead I might try wrapping the whole request in WebScarab and intercepting the requests. The benefit of this is access to WebScarab's library functionality. Modifying parameter fields in WebScarab will be much easier than trying to parse them out of a Crawljax dump.

Crawling web services is simple: direct the enumerator at the WSDL/WADL and parse out the target fields. Easy enough.
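As a rough sketch of that parsing step, here is how the target fields might be pulled out of a WSDL using Ruby's bundled REXML. The WSDL fragment is made up, and namespace handling is omitted for brevity, so treat this as an illustration rather than a complete parser.

```ruby
require 'rexml/document'

# A stripped-down, made-up WSDL fragment of the kind the enumerator
# would fetch (namespaces omitted for brevity).
WSDL = <<~XML
  <definitions>
    <message name="LoginRequest">
      <part name="username" type="xsd:string"/>
      <part name="password" type="xsd:string"/>
    </message>
    <portType name="AuthPort">
      <operation name="Login">
        <input message="tns:LoginRequest"/>
      </operation>
    </portType>
  </definitions>
XML

# Pull out every operation and every message part name -- these become
# the target fields the fuzzer will mutate.
def wsdl_targets(xml)
  doc = REXML::Document.new(xml)
  ops    = doc.elements.to_a('//operation').map { |op| op.attributes['name'] }
  fields = doc.elements.to_a('//part').map { |p| p.attributes['name'] }
  { operations: ops, fields: fields }
end
```

A real WSDL would need namespace-aware XPath, but the idea is the same: operations and message parts map directly onto fuzzing targets.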

After the whole page is crawled, the output needs to be formatted and handed over to the fuzzer.
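As a sketch of that hand-off (the record layout here is my own assumption, not a settled format), crawled URLs could be flattened into one attack-point record per query parameter using Ruby's stdlib URI module:

```ruby
require 'uri'

# Turn a list of crawled URLs into attack-point records for the fuzzing
# engine: one record per query parameter found. URLs without parameters
# are skipped. Form fields from POST targets would need similar handling.
def attack_points(urls)
  urls.flat_map do |raw|
    uri = URI.parse(raw)
    next [] unless uri.query
    URI.decode_www_form(uri.query).map do |name, _value|
      { url: uri.path, param: name }
    end
  end
end

# Example (example.com is a placeholder host):
#   attack_points(['http://example.com/search?q=test&page=1'])
```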

Web UI: Easy enough. Just need a little Java interface to submit the requests.

Overall, the project will be fun. If you have any ideas or input, please let me know. I'm currently looking at a Java Enumerator/Crawler, a Ruby Fuzzer, and an HTML/PHP/Java Web UI.

I'll keep you updated!
