Websites
Information about web servers that were identified.
The Websites section contains all of the web-related data observed while crawling targets that have accessible web servers. This data is monitored for changes and scanned for malware and phishing links, along with other common risks. Assets that are discovered on out-of-scope hosts are also listed, allowing you to add these hosts as Targets.
The website scan is non-invasive and crawls the website, cataloging assets and connections. It is included with the base target scan.
The Websites view provides a single-pane listing of all findings across the various asset types, along with their risk scores and the hosts they were identified on.
The website scan helps you catalog and monitor the following types of elements:
Certificates: TLS certificates that are in use and any data associated with them, including common name, subject alternative name, expiration, and any TLS protocol versions and ciphers that are offered (see the first sketch after this list).
Scripts: JavaScript that is loaded on any of your Targets, including third-party scripts.
Cookies: Cookies that have been set, along with their attributes and expiration dates.
Headers: HTTP response headers that were observed and any platforms or software products that are associated with them.
Forms: Forms are listed with their destination host, and those that submit sensitive information such as credit card numbers, passwords, and emails are identified (see the second sketch after this list).
Links: Any links that are found on your websites are scanned for malware and phishing content, and their response codes are checked to identify broken links (see the third sketch after this list).
Downloads: Similar to links, downloads are scanned for any malicious content and can be monitored by their file type and where they are hosted.
Connections: All hosts that are observed sending traffic while rendering your website are recorded here.
Meta tags: Meta tags embed descriptive information about your websites and are used for purposes such as SEO and directing the behavior of web crawlers. We extract meta tags during the crawling process.
Pages: Pages are generated based on the URLs that are discovered during the crawling process.
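The certificate details listed above can be gathered with standard TLS tooling. Below is a minimal sketch using Python's standard library; the hostname is a placeholder, and the fields shown are only a small subset of what the scan records.

```python
import socket
import ssl
from datetime import datetime, timezone

HOST = "example.com"   # placeholder host; substitute one of your targets
PORT = 443

context = ssl.create_default_context()

# Open a TLS connection and capture the negotiated parameters.
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()        # parsed certificate fields
        protocol = tls.version()        # negotiated TLS version, e.g. "TLSv1.3"
        cipher = tls.cipher()           # (cipher name, protocol, secret bits)

common_name = dict(field[0] for field in cert["subject"]).get("commonName")
alt_names = [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]
expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)

print("Common name:", common_name)
print("Subject alternative names:", alt_names)
print("Expires:", expires.isoformat())
print("Negotiated protocol and cipher:", protocol, cipher[0])
```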
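Sensitive forms can be spotted by parsing each page's HTML and flagging inputs whose type or name suggests sensitive data. The sketch below uses Python's built-in html.parser with an illustrative heuristic and a made-up page fragment; it is not the scanner's actual rule set.

```python
from html.parser import HTMLParser

# Input types commonly associated with sensitive data (illustrative only).
SENSITIVE_TYPES = {"password", "email", "tel"}

class FormInspector(HTMLParser):
    """Collects each form's action URL and flags likely sensitive inputs."""

    def __init__(self):
        super().__init__()
        self.forms = []       # one dict per <form>
        self._current = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self._current = {"action": attrs.get("action", ""), "sensitive_fields": []}
            self.forms.append(self._current)
        elif tag == "input" and self._current is not None:
            input_type = (attrs.get("type") or "text").lower()
            name = attrs.get("name") or ""
            if input_type in SENSITIVE_TYPES or "card" in name.lower():
                self._current["sensitive_fields"].append(name or input_type)

    def handle_endtag(self, tag):
        if tag == "form":
            self._current = None

# Hypothetical page fragment used only to exercise the parser.
html_doc = """
<form action="https://payments.example.com/checkout">
  <input type="text" name="card_number">
  <input type="password" name="pin">
  <input type="text" name="comment">
</form>
"""

inspector = FormInspector()
inspector.feed(html_doc)
for form in inspector.forms:
    print(form["action"], "->", form["sensitive_fields"])
# https://payments.example.com/checkout -> ['card_number', 'pin']
```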
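Broken-link detection amounts to requesting each discovered URL and recording its response code. A minimal sketch, assuming the target server responds to HEAD requests:

```python
from typing import Optional
import urllib.error
import urllib.request

def check_link(url: str) -> Optional[int]:
    """Return the HTTP status code for a URL, or None if the request fails entirely."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.status
    except urllib.error.HTTPError as exc:
        return exc.code            # e.g. 404 indicates a broken link
    except urllib.error.URLError:
        return None                # DNS failure, connection refused, timeout, etc.

# Hypothetical URLs used only for illustration.
for link in ("https://example.com/", "https://example.com/no-such-page"):
    print(link, "->", check_link(link))
```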
By default, website scans:
Crawl each site for up to 3 hours or 10,000 total pages.
Do not access pages that require a login or authentication.
The IP addresses of the scanner are…
The crawler can be blocked by a Web Application Firewall (WAF). If we detect this, we'll create an issue to alert you.