An overview of the data collected about identified web servers.
The Websites section contains all of the web-related data observed while crawling targets that have accessible web servers. This data is monitored for changes and scanned for malware, phishing links, and other common risks. Assets discovered on out-of-scope hosts are also listed, allowing you to add those hosts as Targets.
The website scan is non-invasive: it crawls the website, cataloging assets and connections. It is included with the base target scan.
The website scan helps you catalog and monitor the following types of elements:
- Certificates: TLS certificates that are in use and any data associated with them, including common name, subject alternative name, expiration, and any TLS protocol versions and ciphers that are offered.
- Meta Tags: Meta tags embed descriptive information about your websites and are used for purposes such as SEO and directing the behavior of web crawlers. We extract meta tags during the crawling process.
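To illustrate the meta tag extraction described above, here is a minimal sketch using Python's standard-library `html.parser`. The `MetaTagParser` class and `extract_meta_tags` helper are hypothetical names for illustration, not the scanner's actual implementation.

```python
from html.parser import HTMLParser


class MetaTagParser(HTMLParser):
    """Collects the attributes of every <meta> tag in an HTML document."""

    def __init__(self):
        super().__init__()
        self.meta_tags = []

    def handle_starttag(self, tag, attrs):
        # <meta> is a void element, so it only produces start-tag events.
        if tag == "meta":
            self.meta_tags.append(dict(attrs))


def extract_meta_tags(html):
    # Hypothetical helper: returns a list of attribute dicts, one per <meta> tag.
    parser = MetaTagParser()
    parser.feed(html)
    return parser.meta_tags


page = """<html><head>
<meta charset="utf-8">
<meta name="description" content="Example site">
<meta name="robots" content="noindex">
</head><body></body></html>"""

tags = extract_meta_tags(page)
# e.g. [{'charset': 'utf-8'}, {'name': 'description', 'content': 'Example site'}, ...]
```

A real crawler would run this against every fetched page and record changes to tags such as `robots` or `description` over time.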
By default, website scans:
- Crawl the site for up to 3 hours or 10,000 total pages.
- Do not access pages that require a login or other authentication.
- The IP addresses of the scanner are…
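The time and page budget above can be sketched as a simple breadth-first crawl loop. This is a minimal illustration assuming the stated defaults (3 hours, 10,000 pages); the `crawl` function and `fetch_links` callback are hypothetical, not the product's actual crawler.

```python
import time
from collections import deque

# Assumed limits, mirroring the documented defaults.
MAX_DURATION_SECONDS = 3 * 60 * 60  # 3 hours
MAX_PAGES = 10_000


def crawl(start_url, fetch_links,
          max_seconds=MAX_DURATION_SECONDS, max_pages=MAX_PAGES):
    """Breadth-first crawl that stops at a time or page budget.

    `fetch_links(url)` is a hypothetical callback returning the links
    found on a page; it stands in for an HTTP fetch plus link extraction.
    """
    deadline = time.monotonic() + max_seconds
    queue = deque([start_url])
    visited = set()
    while queue and len(visited) < max_pages and time.monotonic() < deadline:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        for link in fetch_links(url):
            if link not in visited:
                queue.append(link)
    return visited


# Toy site graph: the crawl stops once the page budget is reached.
site = {"a": ["b", "c"], "b": ["c", "d"], "c": ["d"], "d": []}
pages = crawl("a", lambda url: site.get(url, []), max_pages=3)
```

Whichever limit is hit first, time or page count, ends the crawl, which matches the "up to 3 hours or 10,000 total pages" default described above.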
The crawler can be blocked by a WAF. If this happens, we'll create an issue to alert you. To avoid it, add our scanner IPs to the WAF allow list.