Websites
Information about web servers that were identified.
The Websites section contains all of the web-related data observed while crawling targets that have accessible web servers. This data is monitored for changes, scanned for malware and phishing links, and checked for other common risks. Assets discovered on out-of-scope hosts are also listed, allowing you to add those hosts as Targets.
The website scan is non-invasive: it crawls the website, cataloging assets and connections. It is included with the base target scan.
The Summary view provides a single-pane listing of all findings across the various asset types, along with their risk scores and the hosts they were identified on.
Many parts of the Websites section scan elements for malware, phishing, and adult content. We use third-party services that flag this content based on the host or URL.
The website scan helps you catalog and monitor the following types of elements:
- Certificates: TLS certificates that are in use and any data associated with them, including common name, subject alternative name, expiration, and any TLS protocol versions and ciphers that are offered (see the certificate sketch after this list).
- Headers: HTTP response headers that were observed and any platforms or software products associated with them (see the header sketch after this list).
- Forms: Forms are listed with their destination host, and those that submit sensitive information such as credit card numbers, passwords, and emails are identified (see the form sketch after this list).
- Links: Any links found on your websites are scanned for malware and phishing content, and their response codes are checked to identify broken links (see the link sketch after this list).
- Downloads: Similar to what we do with links, downloads are scanned for any malicious content and can be monitored by their file type and where they are hosted.
- Traffic Hosts: All hosts that are observed sending traffic while rendering your website are recorded here.
- Meta Tags: Meta tags embed descriptive information about your websites and are used for various purposes such as SEO and directing the behavior of web crawlers. We extract meta tags during the crawling process.
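For a concrete sense of the certificate data listed above, here is a minimal Python sketch using only the standard library. The hostname is a placeholder, and the sketch reports the protocol version and cipher negotiated for a single connection rather than enumerating every version and cipher the server offers, as a full scan would.

```python
# Minimal sketch: inspecting a site's TLS certificate with the standard
# library. "example.com" is a placeholder, not one of your targets.
import socket
import ssl

host = "example.com"  # placeholder host
ctx = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        # The common name lives in the subject's relative distinguished names.
        subject = dict(item for rdn in cert["subject"] for item in rdn)
        print("Common name:", subject.get("commonName"))
        print("Subject alternative names:",
              [name for _, name in cert.get("subjectAltName", ())])
        print("Expires:", cert["notAfter"])
        # Protocol version and cipher negotiated for this one connection.
        print("TLS version:", tls.version())
        print("Cipher:", tls.cipher()[0])
```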
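Header-based platform detection can be illustrated the same way. The `FINGERPRINTS` table below is a toy example of the idea, not the product's actual detection ruleset, and the URL is again a placeholder.

```python
# Minimal sketch: reading response headers and matching them against a
# tiny fingerprint table to guess the software behind a site.
from urllib.request import urlopen

FINGERPRINTS = {
    "server": {"nginx": "nginx", "apache": "Apache HTTP Server"},
    "x-powered-by": {"php": "PHP", "express": "Express (Node.js)"},
}

def identify_platforms(url):
    with urlopen(url, timeout=10) as resp:
        headers = {k.lower(): v for k, v in resp.headers.items()}
    found = []
    for header, patterns in FINGERPRINTS.items():
        value = headers.get(header, "").lower()
        for needle, product in patterns.items():
            if needle in value:
                found.append(product)
    return headers, found

headers, products = identify_platforms("https://example.com")  # placeholder
print(products)
```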
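Flagging sensitive forms comes down to inspecting form fields. The sketch below uses a simplified heuristic on input types and names; `SENSITIVE_HINTS` is an illustrative list, not the production rules.

```python
# Minimal sketch: finding forms and flagging inputs that appear to carry
# sensitive data such as passwords, emails, or card numbers.
from html.parser import HTMLParser

SENSITIVE_HINTS = ("password", "email", "card", "cc-number", "cvv")

class FormAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.forms = []        # one dict per <form> encountered
        self._current = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self._current = {"action": attrs.get("action", ""), "sensitive": []}
            self.forms.append(self._current)
        elif tag == "input" and self._current is not None:
            # Flag inputs whose type or name hints at sensitive data.
            hint = (attrs.get("type", "") + " " + attrs.get("name", "")).lower()
            if any(h in hint for h in SENSITIVE_HINTS):
                self._current["sensitive"].append(attrs.get("name", "?"))

    def handle_endtag(self, tag):
        if tag == "form":
            self._current = None

auditor = FormAuditor()
auditor.feed('<form action="https://pay.example.com/submit">'
             '<input type="text" name="cc-number"></form>')
print(auditor.forms)
# [{'action': 'https://pay.example.com/submit', 'sensitive': ['cc-number']}]
```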
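Broken-link detection is a matter of checking each link's response code. This sketch issues `HEAD` requests against placeholder URLs; a production crawler would likely also follow redirects, throttle its requests, and respect robots.txt.

```python
# Minimal sketch: checking link response codes to spot broken links.
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

def link_status(url):
    """Return the HTTP status for a URL, or None if unreachable."""
    try:
        req = Request(url, method="HEAD")
        with urlopen(req, timeout=10) as resp:
            return resp.status
    except HTTPError as err:
        return err.code        # e.g. 404 for a broken link
    except URLError:
        return None            # DNS failure, refused connection, etc.

for url in ("https://example.com", "https://example.com/missing-page"):
    status = link_status(url)
    print(url, "->", status, "(broken)" if status in (None, 404, 410) else "")
```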
By default, website scans are configured as follows:
- They crawl for up to 3 hours or 10,000 total pages.
- They do not access pages that require a login or authentication.
- The IP addresses of the scanner are…
The crawler can be blocked by a WAF. If so, we'll create an issue to alert you; add our scanner IPs to the WAF allow list.