
AI is now being used by cybercriminals to listen to your typing for passwords

August 10, 2023
Be careful who you type your passwords in front of: cybercriminals can figure out which keys you’re pressing simply by having a carefully trained AI model listen to you type – and the technique even works over phone calls and Zoom sessions.

Documented in a newly published paper, the acoustic side channel attack (ASCA) involves recording the sound of a keyboard as it is used to type, either with a nearby smartphone or over a remote conferencing session.

Each key, it turns out, has a slightly different sound whose subtleties may not be discernible to the human ear, but can be picked up when the sound is digitised and analysed by a carefully trained AI model.

In this case, researchers used a stock iPhone 13 to record the sound of a 16-inch Apple MacBook Pro’s keyboard at the standard 44.1kHz sampling rate.
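
The first step is isolating individual keypress sounds from the raw recording. The article doesn’t detail the researchers’ segmentation method, but a minimal Python sketch of one common approach – short-time energy thresholding – looks like this (the window size and threshold are illustrative assumptions):

```python
# Illustrative sketch: find keypress segments by short-time energy thresholding.
import numpy as np

def isolate_keystrokes(audio: np.ndarray, sr: int = 44_100,
                       window_ms: float = 10.0, threshold: float = 0.02):
    """Yield (start, end) sample indices of segments whose energy exceeds the threshold."""
    win = int(sr * window_ms / 1000)                  # samples per analysis window
    energy = np.array([np.mean(audio[i:i + win] ** 2)
                       for i in range(0, len(audio) - win + 1, win)])
    start = None
    for i, loud in enumerate(energy > threshold):
        if loud and start is None:
            start = i * win                           # a keystroke begins
        elif not loud and start is not None:
            yield start, i * win                      # the keystroke ends
            start = None
    if start is not None:                             # recording ended mid-keystroke
        yield start, len(audio)
```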

Audio data was converted into visual mel-spectrograms, which were then fed into a deep learning classifier that compared them against training images mapping the sounds of known keypresses.
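
A minimal Python sketch of that pipeline, using librosa for the mel-spectrogram and a toy PyTorch CNN in place of the researchers’ actual classifier (the file name, 36-key label set, and network shape are all illustrative assumptions):

```python
# Illustrative sketch: one isolated keystroke clip -> log-mel-spectrogram -> CNN.
import librosa
import numpy as np
import torch
import torch.nn as nn

SAMPLE_RATE = 44_100  # the standard 44.1kHz rate used in the study
NUM_KEYS = 36         # assumed label set, e.g. a-z plus 0-9

def keystroke_to_melspec(path: str) -> torch.Tensor:
    """Load one isolated keystroke clip and convert it to a log-mel-spectrogram."""
    y, sr = librosa.load(path, sr=SAMPLE_RATE)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    mel_db = librosa.power_to_db(mel, ref=np.max)          # log scale
    return torch.from_numpy(mel_db).float()[None, None]    # shape (1, 1, mels, frames)

class KeystrokeClassifier(nn.Module):
    """Toy stand-in for the deep learning classifier trained on known keypresses."""
    def __init__(self, num_keys: int = NUM_KEYS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.head = nn.Linear(16 * 8 * 8, num_keys)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# Usage (hypothetical clip):
# logits = KeystrokeClassifier()(keystroke_to_melspec("keystroke.wav"))
# predicted_key = logits.argmax(dim=1)
```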

The technique – created by a team of British academics including recent Durham University graduate Joshua Harrison, University of Surrey software security lecturer Ehsan Toreini, and Royal Holloway, University of London’s Dr Maryam Mehrnezhad – was able to determine which keys were pressed with 95 per cent accuracy when the sound of the typing was recorded using a smartphone.

The method was 93 per cent accurate when the typing sounds were recorded using Zoom videoconferencing software’s built-in recording option – suggesting that online meeting participants could snoop on the passwords, notes, and other data that non-muted participants typed during the meeting.

“Recording in this manner required no access to the victim’s environment and did not require any infiltration of their device or connection,” the team noted.

Laptops are more susceptible to ASC attacks than desktops because they are often moved between environments where someone could easily listen to the keyboard’s sounds, such as at a library, coffee shop, or study space.

The researchers simulated this by resting their iPhone on a desk, on top of a microfibre cloth to dampen vibrations, just 17cm away from the laptop.

“Laptops are more transportable than desktop computers and therefore more available in public areas where keyboard acoustics may be overheard,” the researchers said, warning that “with recent developments in deep learning, the ubiquity of microphones and the rise in online services via personal devices, ASC attacks present a greater threat to keyboards than ever.”

Your typing is your password

The findings expose the latest weakness in an era where cybercriminals use keyloggers to harvest sensitive data – and employers like IAG have been caught using similar tools to monitor employee productivity and, in one recent case, to support an employee’s dismissal.

Researchers have long explored ways to conduct side channel attacks on monitors, printers, CPUs, 3D printers, wireless keyboards, and other devices.

Yet keyboards, the researchers noted, are a universal and rarely protected target, regularly used to interact with sensitive systems and to enter sensitive data.

“The ubiquity of keyboard acoustic emanations makes them not only a readily available attack vector,” the researchers warn, “but also prompts victims to underestimate (and therefore not try to hide) their output.”

“Uniformity” in laptop design – units of the same laptop model tend to share identical keyboards – means an AI model trained on one machine’s sounds can be turned against any other of the same model. “Should a popular laptop prove susceptible to ASC attacks,” the researchers said, “a large portion of the population could be at risk.”

Potential victims can defend themselves relatively easily: the authors note that switching to touch typing reduced recognition accuracy considerably, as did using mixed-case passwords – the AI model can pick up the sound of the Shift key being pressed, but cannot detect its release amid the noise of the other keys.

Other options include playing music or sounds to hide the keyboard sounds, or using software to mix white noise and fake keystrokes into the transmitted audio.
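
A minimal sketch of that second countermeasure, assuming audio arrives as a float array in the range [-1, 1] – the function name, noise level, and decoy handling are illustrative, not any specific tool’s API:

```python
# Illustrative sketch: bury real keystroke sounds under noise and decoys.
import numpy as np

def mask_keystrokes(audio: np.ndarray, noise_level: float = 0.05,
                    decoys: list[np.ndarray] | None = None) -> np.ndarray:
    """Mix white noise and optional pre-recorded decoy keystrokes into an audio buffer."""
    rng = np.random.default_rng()
    out = audio + noise_level * rng.standard_normal(len(audio))  # white noise bed
    for decoy in decoys or []:
        if len(decoy) >= len(out):
            continue                                  # skip decoys longer than the buffer
        start = rng.integers(0, len(out) - len(decoy))
        out[start:start + len(decoy)] += decoy        # overlay a fake keypress
    return np.clip(out, -1.0, 1.0)                    # keep samples in valid range
```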

With microphones now embedded in smartphones, smart watches, laptops, webcams, smart speakers, and other devices, physically avoiding them has become all but impossible – prompting more research into ASCAs and their countermeasures.

“With the recent developments in the performance of (and access to) both microphones and deep learning models,” the researchers note, “the feasibility of an acoustic attack on keyboards begins to look likely.”