Live check result: success — no signs of automation detected. Checks performed: Webdriver, CDP, User-Agent, Navigator.

Bot threat protection

Fingerprinting and bot detection are most effective against online fraud when used together.

DevTools Protocol Bot Detection

Detects whether developer tools or the Chrome DevTools Protocol (CDP) are being used to drive the browser programmatically.
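
One commonly cited client-side heuristic for this is to hand the console an object whose property getter fires only when a DevTools Protocol client serializes it. The sketch below (TypeScript, assuming a Chromium-based browser) illustrates the idea; it is a weak signal, and newer automation stacks can avoid triggering it.

```typescript
// Heuristic: when a DevTools client has the console API active, Chromium
// serializes arguments passed to console.* calls, which can touch the
// Error's `stack` property and fire the getter below. Without an attached
// client the getter is normally never invoked. This is a heuristic only.
function probeCdpSerialization(): boolean {
  let touched = false;
  const probe = new Error();
  Object.defineProperty(probe, 'stack', {
    get() {
      touched = true;   // only reached if something serializes the error
      return '';
    },
  });
  // console.debug keeps the probe out of the visible console in most setups
  console.debug(probe);
  return touched;
}

// Example: combine with the straightforward webdriver flag
const cdpSuspected = probeCdpSerialization();
const webdriverFlag = navigator.webdriver === true;
console.log({ cdpSuspected, webdriverFlag });
```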

Navigator

This check looks for Navigator object properties that have been altered or spoofed by bots or automation tools in an attempt to evade detection.
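
A minimal sketch of the kind of consistency checks this involves, assuming a desktop Chromium browser. The check names and the specific properties inspected are illustrative; each is only a weak signal on its own.

```typescript
// Consistency checks on the Navigator object. Real systems combine many
// such signals before making a decision.
interface NavigatorCheck {
  name: string;
  suspicious: boolean;
}

function checkNavigator(): NavigatorCheck[] {
  const checks: NavigatorCheck[] = [];

  // 1. The webdriver flag that automation frameworks set by default.
  checks.push({ name: 'webdriver-flag', suspicious: navigator.webdriver === true });

  // 2. webdriver normally lives on Navigator.prototype; an own-property
  //    descriptor on the instance suggests it was deleted or overridden.
  const patched = Object.getOwnPropertyDescriptor(navigator, 'webdriver') !== undefined;
  checks.push({ name: 'webdriver-patched', suspicious: patched });

  // 3. Spoofed getters are usually plain JS functions; native getters
  //    stringify with "[native code]".
  const languagesGetter = Object.getOwnPropertyDescriptor(Navigator.prototype, 'languages')?.get;
  const getterSource = languagesGetter ? Function.prototype.toString.call(languagesGetter) : '';
  checks.push({
    name: 'languages-getter-overridden',
    suspicious: getterSource !== '' && !getterSource.includes('[native code]'),
  });

  // 4. Headless or stripped-down environments often report no languages.
  checks.push({ name: 'languages-empty', suspicious: navigator.languages.length === 0 });

  return checks;
}

console.table(checkNavigator());
```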

Bot detection guide

This detection mechanism identifies automated agents or bots running in browsers or scripts. On the client side, it analyzes attributes such as the browser fingerprint, JavaScript execution capabilities, WebRTC status, Canvas rendering patterns, Navigator object properties, and plugin information, and compares them with human interaction signals such as mouse movements, screen dimensions, and CAPTCHA-solving behavior to decide whether the current session is human or automated. On the network side, it analyzes HTTP headers, the browser fingerprint, and the TLS fingerprint to detect anomalous requests originating from automated agents or malicious bots. Major human verification systems, including Cloudflare Turnstile, Google reCAPTCHA, and hCaptcha, integrate similar detection mechanisms and assess visitors implicitly, without requiring any additional action from the user.
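
As an illustration of how such signals might be combined, the following sketch aggregates a few client-side signals into a single score. The signal names and weights are invented for the example; production systems such as those mentioned above use far richer models and server-side correlation.

```typescript
// Aggregate individual signals into one automation score in [0, 1].
interface Signal {
  name: string;
  weight: number;   // contribution when the signal fires
  fired: boolean;
}

function automationScore(signals: Signal[]): number {
  const total = signals.reduce((sum, s) => sum + s.weight, 0);
  const fired = signals.filter(s => s.fired).reduce((sum, s) => sum + s.weight, 0);
  return total === 0 ? 0 : fired / total;   // 0 = human-like, 1 = bot-like
}

// Hypothetical signals collected elsewhere on the page.
const signals: Signal[] = [
  { name: 'navigator.webdriver', weight: 3, fired: navigator.webdriver === true },
  { name: 'no-mouse-movement',   weight: 2, fired: false /* set by a mousemove listener */ },
  { name: 'zero-plugins',        weight: 1, fired: navigator.plugins.length === 0 },
  { name: 'tiny-screen',         weight: 1, fired: screen.width < 100 || screen.height < 100 },
];

console.log('automation score:', automationScore(signals).toFixed(2));
```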

Detection Result Classification

1. Verified Automated Agents (Good Bots)

These bots are legitimate and serve useful functions. Search engine crawlers (Googlebot, Bingbot, Baiduspider) crawl and index web content; monitoring tools (e.g. Pingdom, UptimeRobot) track site availability and performance; compliant data scraping services adhere to robots.txt rules and clearly identify themselves in their User-Agent strings. Although automated, these bots are website-friendly, with regulated behavior and controlled request frequency.
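
A common way to confirm a self-identified crawler is the reverse-DNS handshake: reverse-resolve the client IP, check the hostname against the crawler operator's published domains, then forward-resolve it back to the same IP. The Node.js sketch below illustrates this for a visitor claiming to be Googlebot; the domain suffixes follow Google's published guidance and should be adapted for other crawlers.

```typescript
import { promises as dns } from 'node:dns';

// Verify a claimed Googlebot: reverse lookup, domain check, forward lookup.
async function isVerifiedGooglebot(ip: string): Promise<boolean> {
  try {
    const hostnames = await dns.reverse(ip);
    for (const host of hostnames) {
      if (!/\.(googlebot|google)\.com$/.test(host)) continue;
      const { address } = await dns.lookup(host);
      if (address === ip) return true;   // forward lookup matches: verified
    }
  } catch {
    // NXDOMAIN or lookup failure: treat as unverified
  }
  return false;
}

// Example usage with an IP taken from the incoming request.
isVerifiedGooglebot('66.249.66.1').then(ok => console.log('verified:', ok));
```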

2. Malicious Bots

These bots attempt unauthorized activities such as scraping, data extraction, and brute-force attacks. They often disguise themselves with fake User-Agent strings and employ various evasion techniques. Common characteristics include: using automation tools such as Selenium, Puppeteer, or Playwright; simulating user behavior such as mouse clicks, keyboard input, and form submissions; bypassing security measures such as login authentication and CAPTCHAs; and engaging in account takeover, credential stuffing, content scraping, price scraping, ad fraud, DDoS attacks, and similar abuse. Even when the User-Agent is spoofed, their behavioral patterns, TLS fingerprints, and JavaScript properties may still reveal automation traits.
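
As a server-side illustration, the sketch below scores a request's headers for the kind of inconsistencies such spoofing leaves behind, e.g. a User-Agent that claims modern desktop Chrome but arrives without the Client Hints header real Chrome normally sends. The header names are standard; the weights and threshold are invented for the example.

```typescript
type RequestHeaders = Record<string, string | undefined>;

// Score a request's headers for User-Agent / header inconsistencies.
function suspicionScore(headers: RequestHeaders): number {
  const ua = headers['user-agent'] ?? '';
  let score = 0;

  if (ua === '') score += 3;                                    // no UA at all
  const claimsChrome = /Chrome\/\d+/.test(ua) && !/Edg|OPR/.test(ua);

  if (claimsChrome && !headers['sec-ch-ua']) score += 2;        // modern Chrome normally sends client hints over HTTPS
  if (!headers['accept-language']) score += 1;                  // real browsers send a language list
  if (/HeadlessChrome/.test(ua)) score += 3;                    // explicit headless marker
  if (/python-requests|curl|wget|scrapy/i.test(ua)) score += 3; // scripted HTTP clients

  return score;   // e.g. a score >= 3 could trigger a challenge or block
}

console.log(suspicionScore({
  'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
  'accept-language': 'en-US,en;q=0.9',
  // note: no sec-ch-ua header, which many automation setups forget to send
}));
```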

3. Legitimate Traffic / Non-Bot Behavior

This category covers visitors that exhibit typical human behavior: they use common browsers (Chrome, Safari, Firefox, Edge, etc.), support JavaScript, cookies, and standard storage and HTML APIs, and show natural, non-uniform interaction patterns. These visitors are classified as regular users, with no abnormal behavior detected.
