
© 2024 Halo Security

Websites

Information about web servers that were identified.


Last updated 3 months ago


The Websites section contains all of the web-related data observed while crawling targets that have accessible web servers. This data is monitored for changes and scanned for malware, phishing links, and other common risks. Assets discovered on out-of-scope hosts are also listed, allowing you to add those hosts as Targets.

The website scan is non-invasive and crawls the website cataloging assets and connections. It is included with the base target scan.
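To make the non-invasive crawl concrete, the sketch below catalogs the asset URLs a single HTML page references using only GET-style parsing, with no form submission or state change. This is an illustrative sketch using Python's standard library, not the product's implementation; `https://example.com/` is a placeholder.

```python
# Sketch: catalog asset URLs referenced by one HTML page (scripts, links, images).
from html.parser import HTMLParser
from urllib.parse import urljoin

class AssetCollector(HTMLParser):
    """Collects script, anchor, and image URLs from a single HTML page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.assets = {"scripts": [], "links": [], "images": []}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Resolve relative URLs against the page's base URL.
        if tag == "script" and "src" in attrs:
            self.assets["scripts"].append(urljoin(self.base_url, attrs["src"]))
        elif tag == "a" and "href" in attrs:
            self.assets["links"].append(urljoin(self.base_url, attrs["href"]))
        elif tag == "img" and "src" in attrs:
            self.assets["images"].append(urljoin(self.base_url, attrs["src"]))

collector = AssetCollector("https://example.com/")
collector.feed('<a href="/about">About</a><script src="/app.js"></script>')
```

A full crawler would repeat this for each discovered page; the point here is that cataloging requires only reading responses.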

The Summary view provides a single-pane listing of all findings across the various asset types, along with their risk scores and the hosts they were identified on.

Many parts of the Websites section scan elements for malware, phishing, and adult content. We use third-party services, such as Google WebRisk and Phishtank, to flag this content based on the host or URL.
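Conceptually, flagging "based on the host or URL" means normalizing each URL to its host and checking that host (and its parent domains) against a reputation list. The sketch below illustrates this with a small local blocklist; the hostnames are made-up examples, and real reputation services operate at far larger scale.

```python
# Illustrative sketch only: flag a URL by checking its host against a blocklist.
from urllib.parse import urlparse

BLOCKLISTED_HOSTS = {"malware.example", "phish.example"}  # hypothetical entries

def flag_url(url: str) -> bool:
    """Return True when the URL's host or any parent domain is blocklisted."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check host and each parent domain: a.b.example -> b.example -> example
    return any(".".join(parts[i:]) in BLOCKLISTED_HOSTS for i in range(len(parts)))

print(flag_url("https://phish.example/login"))  # True
print(flag_url("https://example.com/"))         # False
```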

Elements

The website scan helps you catalog and monitor the following types of elements:

  • Certificates: TLS certificates that are in use and any data associated with them, including common name, subject alternative name, expiration, and any TLS protocol versions and ciphers that are offered.

  • Scripts: JavaScript that is loaded on any Targets, including third-party scripts.

  • Cookies: Cookies that have been set, along with their attributes and expiration dates.

  • Headers: HTTP response headers that were observed and any platforms or software products associated with them.

  • Forms: Forms are listed with their destination host, and those that submit sensitive information, such as credit card numbers, passwords, and email addresses, are identified.

  • Links: Any links found on your websites are scanned for malware and phishing content, and their response codes are checked to identify broken links.

  • Downloads: Similar to what we do with links, downloads are scanned for malicious content and can be monitored by their file type and where they are hosted.

  • Traffic Hosts: All hosts observed sending traffic while rendering your website are recorded here.

  • Meta Tags: Meta tags embed descriptive information about your websites and are used for purposes such as SEO and directing the behavior of web crawlers. We extract meta tags during the crawling process.

  • Pages: Pages are generated based on the URLs discovered during the crawling process.
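As an illustration of how sensitive forms can be identified, the sketch below classifies form input fields by their `type` attribute and by common field-name fragments. This is a conceptual sketch, not the product's detection logic; the field-name fragments are assumptions.

```python
# Sketch: flag form inputs that collect sensitive data (by type or field name).
from html.parser import HTMLParser

SENSITIVE_TYPES = {"password", "email"}
SENSITIVE_NAME_FRAGMENTS = {"card", "cc-number", "ssn"}  # illustrative only

class FormScanner(HTMLParser):
    """Records the names of input fields that look sensitive."""
    def __init__(self):
        super().__init__()
        self.sensitive_fields = []

    def handle_starttag(self, tag, attrs):
        if tag != "input":
            return
        attrs = dict(attrs)
        name = (attrs.get("name") or "").lower()
        if attrs.get("type") in SENSITIVE_TYPES or any(
            frag in name for frag in SENSITIVE_NAME_FRAGMENTS
        ):
            self.sensitive_fields.append(name)

scanner = FormScanner()
scanner.feed('<form action="https://pay.example">'
             '<input type="text" name="card_number">'
             '<input type="password" name="pw"></form>')
```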

Scan Details

By default, website scans:

  • Crawl for up to 3 hours or 10,000 total pages.

  • Do not access pages that require a login or authentication.

  • The IP addresses of the scanner are…
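The default limits above amount to a crawl budget: stop when either the page count or the time budget is exhausted. The sketch below shows one way such a budget could be enforced in a breadth-first crawl; the enforcement details in the product are not documented, so this is an assumption-laden illustration.

```python
# Sketch: breadth-first crawl bounded by page count and wall-clock time.
import time

MAX_PAGES = 10_000           # default page budget
MAX_SECONDS = 3 * 60 * 60    # default time budget (3 hours)

def crawl(seed_urls, fetch):
    """Crawl until the page or time budget runs out.

    `fetch` is a caller-supplied function that takes a URL and returns
    the list of URLs discovered on that page.
    """
    start = time.monotonic()
    queue, seen = list(seed_urls), set(seed_urls)
    pages_crawled = 0
    while queue and pages_crawled < MAX_PAGES:
        if time.monotonic() - start > MAX_SECONDS:
            break  # time budget exhausted
        url = queue.pop(0)
        pages_crawled += 1
        for link in fetch(url):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return pages_crawled
```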

The crawler can be blocked by a Web Application Firewall (WAF). If we detect this, we'll create an issue to alert you.
