Fraudulent tactics to evade detection are ever-changing, highlighting the need for fraud prevention methods that can evolve with them. TransUnion’s new bot detection rule is our latest step in that direction.

When a transaction contains features indicating it originated from an automated (non-human) client, our bot detection rule fires on that transaction immediately. These features cover a wide range of data, including statistics about a user’s physical interaction with their device, inconsistencies that suggest manipulated data, and traits of virtual devices or automation tools.

Before we dive into more detail, however, let’s get back to the basics. What exactly is a bot?

First things first: defining a ‘bot’

The term “bot” can be confusing, as its meaning may vary depending on the context. According to Wikipedia, an internet bot is a software application that runs automated tasks (scripts) over the Internet.1 Depending on its purpose, a bot can range from helpful to malicious, but in the context of internet fraud prevention, we’ve narrowed this definition to mean malicious bots used as tools to commit internet fraud and abuse.

Bots can automate repetitive tasks to generate a large amount of traffic very quickly. This might include simultaneously controlling multiple accounts on a social network or gaming site, or leveraging large sets of stolen credentials through automated login attempts. Bots can also generate traffic at superhuman speeds to purchase tickets or products with limited availability.

The impact and scope of these automated attacks can vary widely. We’ve seen everything from a very small handful of transactions in a day (which sometimes fly under the radar), to massive attacks that produce millions of transactions in a single day.

There are bots, and there are botnets

It’s important to note that a bot is not a botnet. A botnet is a collection of devices (e.g. laptops, mobile phones, or IoT devices such as cameras) controlled by a single person, usually to perform some sort of distributed attack, to obscure the true IP source, or both.

Alternatively, a bot may reside on a device that’s been infected with botnet malware, or send its traffic through a botnet, but that doesn’t have to be the case. A bot could also send automated traffic through a hosting facility, a residential IP address, a VPN service, or any other internet connection.

With automated attacks becoming more sophisticated in evading detection, our tactics must evolve

The new bot detection rule provides another tool to shed light on transactions that have traits associated with automation, but might not have a device, account, or even IP address link to known fraud risk. Using features collected from the client device during a transaction, it can identify potentially risky transactions even if they aren’t linked to known fraud via evidence, IP risk, etc.

We used several measures to rank the effectiveness of the potential features that went into the automated bot detection rule. Our traditional sources of risk labels (like subscriber-placed evidence) help to label risky transactions and can corroborate the effectiveness of a particular component of the rule, but what about situations in which we lack good labeling? Since automated attacks are not always identified this way, we instead have to look for indications of non-organic processes based on certain characteristics within traffic patterns. For example, here is a small sampling of conditions that the rule evaluates:

  • Physical interactions that would be unusual for humans
  • Virtualization and emulation tools
  • Manipulated device data

Attack vectors vary widely; any one of these conditions may suggest an automated attack, even when the others are absent.
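As a rough illustration (not TransUnion’s actual implementation), a rule like this can be modeled as a set of independent boolean conditions over transaction features, where any single condition firing is enough to flag the transaction. All feature names and thresholds below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TransactionFeatures:
    # Hypothetical features; a real rule would draw on far richer signals.
    touch_events: int          # physical interactions observed during the session
    typing_interval_ms: float  # mean time between keystrokes
    is_emulator: bool          # traits of virtualization/emulation tools detected
    ua_platform_mismatch: bool # user agent disagrees with other device data

def bot_conditions(f: TransactionFeatures) -> list[str]:
    """Return the names of all bot-indicator conditions that fired."""
    fired = []
    # Physical interactions that would be unusual for humans
    if f.touch_events == 0 or f.typing_interval_ms < 15:
        fired.append("unusual_physical_interaction")
    # Virtualization and emulation tools
    if f.is_emulator:
        fired.append("virtualization_detected")
    # Manipulated device data (internal inconsistencies)
    if f.ua_platform_mismatch:
        fired.append("manipulated_device_data")
    return fired

def rule_fires(f: TransactionFeatures) -> bool:
    # Attack vectors vary, so any single condition is enough to fire.
    return len(bot_conditions(f)) > 0
```

Because the conditions are independent, a transaction from an emulated device still fires the rule even if its simulated physical interactions look plausibly human.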

When it comes to large-scale or persistent bot attacks, the rate at which the bot generates transactions does not follow the normal daily and weekly patterns for a given subscriber. Some attacks are very spiky, generating many transactions in a short amount of time, while others produce transaction rates that are too steady. In practice, we often see a combination of signals: for instance, an abnormal traffic pattern alongside partial labeling of the transactions as risky. These deviations from normal traffic patterns give us another way to determine whether a component of the rule is finding automated transactions.

The future of bot detection

TransUnion’s approach to detecting bots will continue to expand over time. As we get more feedback from our subscribers, we will continue to iterate and enhance automation detection based on real-world performance.

To find out more about how our bot detection rule works within our fraud detection product suite, request a demo with us today.

Interested in learning about combating fraud stemming from botnet attacks? Take a look at our insight guide for more information.