I have been tasked with getting us going with application scanning. To check whether I understand what I am doing (apparently I don't), one of the first things I did was create a "bad" web site on one of our test servers. It is not accessible outside our firewall, and I'm certainly not putting any real data on it in any case. This Basstuff site has a form to enter, among other things, a field called SSN, formatted to accept strings that look like SSNs. There is another field called "Credit Card Number" that accepts 16-digit data, and so on.
I deliberately left this site open to SQL injection and have confirmed that (what would normally be) malicious SQL injection is possible. The data-entry default.asp page submits its data to a results page that shows the entered data, and there is one other page, called showall, that shows all the data previously submitted by anyone. I'm looking to follow "worst practices" here. I can imagine that a worse site could exist, but not by much, and if anything ought to trigger high-impact warnings, this ought to.
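For anyone reproducing a test page like this, the vulnerable pattern in question is simply user input concatenated into SQL text. Below is a minimal sketch in Python with an in-memory SQLite database; the table and column names are hypothetical stand-ins (the actual site is classic ASP, but the injectable pattern is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE submissions (ssn TEXT)")
conn.execute("INSERT INTO submissions VALUES ('123-45-6789')")
conn.commit()

def lookup(ssn_input):
    # Deliberately vulnerable: the input is concatenated directly into
    # the SQL string, mirroring the "worst practice" on the test page.
    query = "SELECT ssn FROM submissions WHERE ssn = '" + ssn_input + "'"
    return conn.execute(query).fetchall()

# A lookup for an unknown SSN returns no rows...
print(lookup("000-00-0000"))
# ...but a classic tautology payload returns everything in the table.
print(lookup("x' OR '1'='1"))
```

A scanner's injection checks work by submitting payloads like the second one and watching for the response to change, so a form wired up this way should be detectable.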
However, when I scan this site, the scan result shows no vulnerabilities, with no comment about the fact that it asks for SSNs or credit card data, or about the SQL injection. Furthermore, no matter which crawl scope I select, the scan does not appear to crawl either to the page where the form data is submitted or to the showall page (to which I conveniently left a link on the "main" data-entry page).
I'm not sure whether there is a nice way to extract a scan profile, or whether posting one here would accord with community standards, but I would appreciate any clues as to what I am or am not doing that prevents the WAS scan from noticing the SQL injection or from crawling the site.