Websites
Information about web servers that were identified.
The Websites section contains all of the web-related data observed while crawling targets that have accessible web servers. This data is monitored for changes and scanned for malware and phishing links, along with other common risks. Assets that are discovered on out-of-scope hosts are also listed, allowing you to add these hosts as Targets.
The website scan is non-invasive: it crawls the website, cataloging assets and connections. It is included with the base target scan.
The Summary view provides a single-pane listing of all findings across the various asset types, along with their risk scores and the hosts on which they were identified.
Many parts of the Website section scan elements for Malware, Phishing, and Adult content. We use third-party services, such as Google Web Risk and PhishTank, to flag this content based on the host or URL.
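To make the lookup concrete, here is a minimal sketch of checking a URL against the Google Web Risk API. The API key, the threat types queried, and the helper name are illustrative assumptions; this is not this product's actual integration.

```python
# Minimal sketch: look up a URL against the Google Web Risk API.
# The API key and threat-type selection are placeholders for illustration.
import requests

WEB_RISK_ENDPOINT = "https://webrisk.googleapis.com/v1/uris:search"

def check_url(url: str, api_key: str) -> list[str]:
    """Return the threat types Web Risk reports for a URL, if any."""
    params = {
        "key": api_key,
        "uri": url,
        # Repeated query parameter: check for both malware and phishing.
        "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
    }
    resp = requests.get(WEB_RISK_ENDPOINT, params=params, timeout=10)
    resp.raise_for_status()
    # An empty response body means the URL was not found on any threat list.
    return resp.json().get("threat", {}).get("threatTypes", [])

# Example: check_url("http://example.com/download.exe", "YOUR_API_KEY")
```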
Elements
The website scan helps you catalog and monitor the following types of elements:
Certificates: TLS certificates that are in use and any data associated with them, including common name, subject alternative name, expiration, and any TLS protocol versions and ciphers that are offered (see the certificate sketch after this list).
Scripts: JavaScript that is loaded on any Targets, including third-party scripts.
Cookies: Cookies that have been set, along with their attributes and expiration dates.
Headers: HTTP response headers that were observed and any platforms or software products that are associated with them.
Forms: Forms are listed with their destination host, and those that submit sensitive information such as credit card numbers, passwords, and email addresses are identified (see the form sketch after this list).
Links: Any links found on your websites are scanned for malware and phishing content, and their response codes are checked to identify broken links (see the link-check sketch after this list).
Downloads: As with links, downloads are scanned for malicious content and can be monitored by their file type and where they are hosted.
Traffic Hosts: All hosts that are observed sending traffic while rendering your website are recorded here.
Meta Tags: Meta tags embed descriptive information about your websites and are used for various purposes such as SEO and directing the behavior of web crawlers. We extract meta tags during the crawling process.
Pages: Pages are generated based on the URLs that are discovered during the crawling process.
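The certificate data described above can be gathered with a plain TLS handshake. The following is a minimal sketch, not the scanner's implementation; it shows only the negotiated protocol and cipher, whereas enumerating every offered version and cipher would require repeated handshakes with restricted client settings.

```python
# Minimal sketch: fetch a server's TLS certificate and connection details.
import socket
import ssl

def describe_certificate(host: str, port: int = 443) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return {
                "common_name": dict(x[0] for x in cert["subject"]).get("commonName"),
                "subject_alt_names": [v for _, v in cert.get("subjectAltName", [])],
                "expires": cert["notAfter"],
                "negotiated_protocol": tls.version(),  # e.g. "TLSv1.3"
                "negotiated_cipher": tls.cipher()[0],
            }

# Example: describe_certificate("example.com")
```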
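One plausible way to flag forms that collect sensitive information is to inspect input types and field names while parsing each page. This sketch uses only the standard library; the keyword heuristics are illustrative assumptions, not the scanner's actual rules.

```python
# Simplified sketch: flag forms whose fields look like they collect
# sensitive data. The keyword list is an illustrative assumption.
from html.parser import HTMLParser

SENSITIVE_HINTS = ("password", "email", "card", "ccnum", "cvv", "ssn")

class FormAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current_action = None
        self.findings = []  # (form action, suspicious field name)

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.current_action = attrs.get("action", "")
        elif tag == "input" and self.current_action is not None:
            name = (attrs.get("name") or "").lower()
            itype = (attrs.get("type") or "").lower()
            if itype == "password" or any(h in name for h in SENSITIVE_HINTS):
                self.findings.append((self.current_action, name or itype))

    def handle_endtag(self, tag):
        if tag == "form":
            self.current_action = None

auditor = FormAuditor()
auditor.feed('<form action="https://pay.example.com"><input name="ccnum"></form>')
print(auditor.findings)  # [('https://pay.example.com', 'ccnum')]
```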
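Broken-link detection can be approximated with a simple status-code check. A sketch follows; treating every 4xx/5xx response or connection failure as broken is a simplification, not necessarily how the scanner classifies links.

```python
# Sketch: classify links as broken based on HTTP status codes.
import requests

def is_broken(url: str) -> bool:
    try:
        # HEAD keeps the check lightweight; fall back to GET if refused.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        if resp.status_code == 405:  # method not allowed
            resp = requests.get(url, stream=True, timeout=10)
        return resp.status_code >= 400
    except requests.RequestException:
        return True  # unreachable counts as broken

# Example: is_broken("https://example.com/missing-page")
```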
Scan Details
By default, website scans are set to crawl the site for:
Up to 3 hours or 10,000 total pages.
They do not access pages that require a login or authentication.
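A crawl budget like this is typically enforced with a deadline and a page counter. Here is a minimal sketch; fetch_page() and extract_links() are hypothetical helpers standing in for the real fetching and parsing logic, and skipping 401/403 responses is a simplification of "no authenticated pages".

```python
# Minimal sketch: enforce a crawl budget of 3 hours or 10,000 pages.
import time
from collections import deque

MAX_SECONDS = 3 * 60 * 60
MAX_PAGES = 10_000

def crawl(start_url, fetch_page, extract_links):
    deadline = time.monotonic() + MAX_SECONDS
    queue, seen = deque([start_url]), {start_url}
    pages = []
    while queue and len(pages) < MAX_PAGES and time.monotonic() < deadline:
        url = queue.popleft()
        page = fetch_page(url)  # hypothetical helper
        if page.status_code in (401, 403):
            continue  # skip pages that require a login or authentication
        pages.append(url)
        for link in extract_links(page):  # hypothetical helper
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return pages
```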
The IP addresses of the scanner are…
The crawler can be blocked by a Web Application Firewall (WAF). If we detect this, we'll create an issue to alert you.
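WAF blocking usually surfaces as challenge pages or repeated rejections. The heuristic below is a rough sketch based on common WAF behavior; the status codes and header names checked are assumptions, not the scanner's actual detection logic.

```python
# Rough heuristic sketch: guess whether a response came from a WAF block.
def looks_like_waf_block(status_code: int, headers: dict) -> bool:
    lowered = {k.lower(): v for k, v in headers.items()}
    challenge_status = status_code in (403, 406, 429, 503)
    waf_fingerprint = (
        "cf-ray" in lowered                 # header set by Cloudflare
        or "x-sucuri-id" in lowered         # header set by Sucuri
        or "cloudflare" in lowered.get("server", "").lower()
    )
    return challenge_status and waf_fingerprint
```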