
Large application scanning frustrations

Question asked by void on Aug 5, 2015
Latest reply on Aug 11, 2015 by fmc

Full disclosure: I'm just starting to roll out full use of WAS in our environment, but my first undertaking, thanks to our lovely friends in the audit dept, is one of our larger applications, which runs on WebSphere Portal Server.  My initial approach was just to do a blast of the site starting at the root level, with the maximum links crawled set to 1000.  Suffice it to say, that was probably a mistake: too much noise.


After a little reconnaissance, I realized there was some logical structure to the URI paths right before a massive alphanumeric string.  So, for example, I managed to start breaking things up as follows:


NOTE: Authentication was successfully used for each of these scenarios. (starting point)


Web App #1 -

Web App #2 -

Web App #3 -


... and so on.  The initial discovery scans for these seemed promising.  They were still in the 400 range (the option profile's max links crawled was set to 500), but again, I am by no means an expert in WebSphere and its inner workings, so I thought things were going well.
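As a rough illustration of the splitting approach above, here's a sketch that groups a list of crawled URLs by their leading path segments while discarding the long opaque tokens (WebSphere Portal encodes navigational state in segments like `!ut/p/...`). The hostname, app names, and the 20-character length threshold are all assumptions for the example, not anything specific to your environment:

```python
import re
from collections import defaultdict
from urllib.parse import urlparse

def group_by_prefix(urls, depth=3):
    """Group URLs by their first `depth` path segments, dropping
    segments that look like opaque alphanumeric state strings."""
    groups = defaultdict(set)
    for url in urls:
        segments = [s for s in urlparse(url).path.split("/") if s]
        # heuristic: 20+ chars of [A-Za-z0-9_!-] is probably encoded state
        segments = [s for s in segments
                    if not re.fullmatch(r"[A-Za-z0-9_!-]{20,}", s)]
        groups["/" + "/".join(segments[:depth])].add(url)
    return groups

# hypothetical portal URLs
urls = [
    "https://portal.example.com/wps/myportal/app1/AAAABBBBCCCC1234567890abcdef",
    "https://portal.example.com/wps/myportal/app1/settings",
    "https://portal.example.com/wps/myportal/app2/home",
]
groups = group_by_prefix(urls)
# groups now maps "/wps/myportal/app1" and "/wps/myportal/app2"
# to their respective URL sets
```

Each resulting prefix could then become the starting URI for its own web app in WAS.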


I kicked off a vuln scan on, say, App #1, and it completed very fast, reporting that it had only vuln-scanned 18 links.  OK, I thought, maybe that one didn't have much for the scanner to interact with and thus only had activity against 18 pages.  I did the second site, which I had manually clicked through during my quick manual crawling activities, and it came back with almost the same vulns and the same number of pages crawled.  Without knowing a boatload about the app, it seemed fishy to me.  I ran them both again and got the same results.  Is this right?
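One way to sanity-check a discrepancy like that is to export the crawled-link lists from the discovery scan report and the vuln scan report and diff them, so you can see exactly which pages the vuln scan skipped. This is a minimal sketch assuming you already have the two link lists extracted from your reports; nothing here is a Qualys API call:

```python
def diff_crawled_links(discovery_links, vuln_links):
    """Compare two crawled-link lists and report what differs."""
    discovery, vuln = set(discovery_links), set(vuln_links)
    return {
        "only_in_discovery": sorted(discovery - vuln),  # pages vuln scan skipped
        "only_in_vuln": sorted(vuln - discovery),       # pages only vuln scan hit
        "in_both": sorted(discovery & vuln),
    }

# example with placeholder paths
diff = diff_crawled_links(
    ["/app1/home", "/app1/search", "/app1/profile"],
    ["/app1/home"],
)
# diff["only_in_discovery"] lists the links the vuln scan never reached
```

If the skipped links cluster under particular paths, that can point at session handling, redirects, or crawl-scope settings cutting the vuln scan short.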


Suffice it to say, I do have a meeting with the application team to get as much detail as possible, but my experiences thus far have highlighted a few areas where I'd like to understand how other folks who have had a lot of success with WAS attack similar situations:


1) How do you handle sites with dynamic content that may skew discovery/vuln scans?

2) What is the best approach to slicing and dicing very large, complex apps?

3) What determines a 'crawled' link in a vuln scan vs. a discovery scan (e.g., in my case the discovery scan crawled hundreds of links, but the vuln scan crawled only 18)?


Any other help or tips you can provide are greatly appreciated!