Op-Ed: Can we trust user login credentials anymore? A solution exists with behavioural biometrics

In a recent Australian survey, 15 per cent of respondents reported being a victim of cyber crime during the pandemic, and 65 per cent said they have become more cautious about their online security during lockdown. Even when users employ enhanced security precautions such as complex and unique passwords, two-factor authentication can add a significant extra layer of security.

Cameron Church
Thu, 25 Mar 2021

Despite varying levels of end-user cyber crime awareness, massive data leaks exposing user information persist across various platforms. Credentials compromised in these attacks end up on the dark web, where cyber criminals deploy large botnets of compromised PCs, smartphones and IoT devices to test username and password pairs across thousands of websites. Once fraudsters identify a match, they log in to numerous websites using the proven credential combinations, take over the accounts and make unauthorised purchases. These kinds of account takeover, or credential stuffing, attacks are the fastest-growing type of bot-driven cyber attack fraudsters use today.

To make matters worse, more than 86 per cent of Australia's top websites can't detect bots. These websites fail to distinguish between a genuine web browser and a scripted attack, meaning an attacker can load the login page with various malicious tools and attempt to log in repeatedly using stolen credentials.

The question becomes, “Can you trust your online users are who they say they are?” Cyber criminals often deploy attacks that leverage multiple attack vectors, from bots and identity spoofing to remote access trojans. They go with what works.

The outlook may sound bleak, but there are effective mitigation strategies that businesses can deploy to detect and block this kind of fraudulent behaviour.

What is behavioural biometrics?

Behavioural biometrics focuses on how a user interacts with their devices. On a desktop, behavioural biometrics tracks how the keyboard is used, including typing speed and the use of special keys, as well as how the mouse moves around the page and how the user interacts with the mouse buttons.
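
To make this concrete, the kind of desktop signals described above can be observed in the browser with standard keyboard and mouse event listeners. The sketch below is a minimal TypeScript illustration, not any vendor's implementation; the field names are invented, and it records key dwell times, gaps between keystrokes and mouse movement deltas while masking the actual characters typed.

```typescript
// Minimal sketch of passive desktop signal capture (hypothetical, illustrative only).
type KeySample = { key: string; dwellMs: number; gapMs: number };
type MouseSample = { dx: number; dy: number; dtMs: number };

const keySamples: KeySample[] = [];
const mouseSamples: MouseSample[] = [];

const keyDownAt = new Map<string, number>();
let lastKeyUp = 0;
let lastMouse: { x: number; y: number; t: number } | null = null;

document.addEventListener("keydown", (e: KeyboardEvent) => {
  keyDownAt.set(e.code, performance.now());
});

document.addEventListener("keyup", (e: KeyboardEvent) => {
  const now = performance.now();
  const down = keyDownAt.get(e.code);
  if (down === undefined) return;
  keySamples.push({
    // Only special keys (Shift, Backspace, Tab, ...) are named; printable characters are masked.
    key: e.key.length > 1 ? e.key : "*",
    dwellMs: now - down,                     // how long the key was held
    gapMs: lastKeyUp ? now - lastKeyUp : 0,  // rhythm between keystrokes
  });
  lastKeyUp = now;
  keyDownAt.delete(e.code);
});

document.addEventListener("mousemove", (e: MouseEvent) => {
  const now = performance.now();
  if (lastMouse) {
    // Movement deltas over time capture speed and path shape, not page content.
    mouseSamples.push({ dx: e.clientX - lastMouse.x, dy: e.clientY - lastMouse.y, dtMs: now - lastMouse.t });
  }
  lastMouse = { x: e.clientX, y: e.clientY, t: now };
});
```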

Evolving mobile device technology has also enabled better behavioural biometrics. It is now possible to track whether the phone is held in landscape or portrait orientation, its usual rotation and tilt, how users engage the touchscreen, their swipe speed, the shape of their finger or pointer, and whether they are applying their typical amount of pressure.
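
On mobile, comparable signals are exposed through standard web sensor and touch events. The following is a minimal, hypothetical sketch of capturing orientation, tilt and touch characteristics; the structure and field names are illustrative only, not any vendor's SDK.

```typescript
// Illustrative mobile signal capture: orientation, tilt and touch characteristics (hypothetical).
type TouchSample = { force: number; radiusX: number; radiusY: number; durationMs: number };

const touchSamples: TouchSample[] = [];
let orientation = screen.orientation?.type ?? "unknown"; // portrait vs landscape
let lastTilt = { beta: 0, gamma: 0 };                    // front-back / left-right tilt
let touchStartAt = 0;

window.addEventListener("deviceorientation", (e: DeviceOrientationEvent) => {
  lastTilt = { beta: e.beta ?? 0, gamma: e.gamma ?? 0 };
});

window.addEventListener("orientationchange", () => {
  orientation = screen.orientation?.type ?? "unknown";
});

document.addEventListener("touchstart", () => {
  touchStartAt = performance.now();
});

document.addEventListener("touchend", (e: TouchEvent) => {
  const t = e.changedTouches[0];
  touchSamples.push({
    force: t.force,       // pressure applied, where the hardware supports it
    radiusX: t.radiusX,   // approximate contact area of the finger or pointer
    radiusY: t.radiusY,
    durationMs: performance.now() - touchStartAt,
  });
});
```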

It is important to note that this technology works within the data sharing permissions of the device. It doesn't need to capture every alphanumeric key action, and it does not capture any personally identifiable information (PII), passwords or secure information. Instead, it looks at the use of special characters and how the consumer engages with them, such as the fluidity of key inputs and the way information was entered.
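
As a hypothetical illustration of "fluidity", simple summary statistics can be derived from keystroke timings alone, without ever touching the characters that were typed. The function below is invented for this example.

```typescript
// Hypothetical feature extraction: typing rhythm from timing data only, no key values.
function typingFluidity(gapsMs: number[]): { meanGapMs: number; gapStdMs: number } {
  if (gapsMs.length === 0) return { meanGapMs: 0, gapStdMs: 0 };
  const mean = gapsMs.reduce((a, b) => a + b, 0) / gapsMs.length;
  const variance = gapsMs.reduce((a, b) => a + (b - mean) ** 2, 0) / gapsMs.length;
  return { meanGapMs: mean, gapStdMs: Math.sqrt(variance) };
}

// Example: a fairly even typing rhythm produces a low gap standard deviation.
console.log(typingFluidity([120, 135, 110, 140, 125]));
```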

Passively detect trusted and fraudulent behaviours

All this information can be used to characterise specific actions and behaviours. When interrogated with data models and machine learning techniques, it can reinforce good, expected patterns of behaviour and flag anomalous, divergent patterns that do not match the innate, human and trusted behavioural indicators expected of a genuine user.
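
One common way to express "divergence from expected behaviour" is to compare a session's features against a stored per-user baseline and measure how far they drift. The sketch below is a simple z-score style illustration under that assumption, not the author's or any vendor's actual model.

```typescript
// Hypothetical anomaly scoring: distance of a session from a user's historical baseline.
type Profile = { mean: Record<string, number>; std: Record<string, number> };

function anomalyScore(session: Record<string, number>, baseline: Profile): number {
  const features = Object.keys(session);
  let total = 0;
  for (const f of features) {
    const std = baseline.std[f] || 1;                        // avoid division by zero
    total += Math.abs(session[f] - baseline.mean[f]) / std;  // z-score per feature
  }
  return total / features.length;                            // higher = more anomalous
}

// Example: typing far faster than usual with unusually fast mouse movement scores higher.
const baseline: Profile = {
  mean: { meanGapMs: 130, mouseSpeedPxMs: 0.8 },
  std: { meanGapMs: 25, mouseSpeedPxMs: 0.3 },
};
console.log(anomalyScore({ meanGapMs: 40, mouseSpeedPxMs: 2.4 }, baseline));
```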

Lifting the lid on behavioural biometrics

Behavioural biometrics data is collected through a webpage profiling feature. Running asynchronously, it processes information in a linear time frame, providing rich data points on both desktop and mobile devices.
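
As a rough sketch of how such asynchronous collection could work in a webpage, captured samples can be batched and flushed off the critical path, for example with navigator.sendBeacon. The endpoint URL and payload shape below are hypothetical.

```typescript
// Hypothetical asynchronous profiling: batch behavioural samples and flush them periodically
// without blocking page interaction.
const PROFILE_ENDPOINT = "/api/behaviour-profile"; // hypothetical collection endpoint
const batch: Array<Record<string, unknown>> = [];

function record(sample: Record<string, unknown>): void {
  batch.push({ ...sample, t: Date.now() });
}

function flush(): void {
  if (batch.length === 0) return;
  const payload = JSON.stringify(batch.splice(0, batch.length));
  // sendBeacon queues the request asynchronously and survives page unload.
  navigator.sendBeacon(PROFILE_ENDPOINT, payload);
}

setInterval(flush, 5000);                    // flush every few seconds
window.addEventListener("pagehide", flush);  // and when the user navigates away
```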

One example of how this data can be interrogated with machine learning is the identification of trusted versus risky behavioural characteristics, using a scoring system across several core fraud threats.

Users can:

  • generate bot scores that identify non-human interaction and the use of automated bots by listening to behavioural signals that indicate common, repeatable and automated behaviours;
  • generate a social engineering score which measures the likelihood that a fraudster is influencing the user’s behaviour; and
  • access an anomaly score, built from behavioural profiles, that indicates how far the current session's behavioural fingerprint varies from the historical fingerprint seen for that user.

If the interaction is confirmed as human, the fraud score can indicate the likelihood that the human is a fraudster. The higher the score, the more likely they are a fraudster. This scoring model is trained over time as behaviours shift and evolve.

Finally, users can see an overall score that considers all these behavioural indicators and is computed based on automated bot, anomaly, fraud, social engineering and system access detection.
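
As an illustration of how such an overall score might be assembled, the sub-scores can be combined into a single risk value. The weights below are invented for the example; the article does not describe the actual model.

```typescript
// Hypothetical combination of sub-scores into one overall risk score (weights invented).
type Scores = { bot: number; anomaly: number; fraud: number; socialEngineering: number; systemAccess: number };

function overallRisk(s: Scores): number {
  const weights: Scores = { bot: 0.3, anomaly: 0.2, fraud: 0.25, socialEngineering: 0.15, systemAccess: 0.1 };
  return (
    s.bot * weights.bot +
    s.anomaly * weights.anomaly +
    s.fraud * weights.fraud +
    s.socialEngineering * weights.socialEngineering +
    s.systemAccess * weights.systemAccess
  );
}

// Example: strong bot and anomaly signals push the overall score up.
console.log(overallRisk({ bot: 0.9, anomaly: 0.7, fraud: 0.2, socialEngineering: 0.1, systemAccess: 0.3 }));
```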

The future of fraud prevention

Behavioural biometrics provides a new, faster way of verifying transactions, whether relating to new account applications, new service registrations or website activity. From e-commerce and financial services to gaming, gambling and media, the tool pushes fraud prevention to a new level.

As behavioural biometrics evolves, it will bring a new balance of risk-based friction for digital transactions, taking security measures into account at the moment of the transaction and hardening fraud controls while keeping the experience friction-appropriate for consumers.

Cameron Church is the director of fraud and identity, LexisNexis Risk Solutions.
