Your Personal Attack Frequency

By Justin Nearing

There is no defense when your digital fingerprint is used as AI training data.

The headline reads:

"New acoustic attack steals data from keystrokes with 95% accuracy"

Highlights include:

A team of researchers from British universities has trained a deep learning model that can steal data from keyboard keystrokes recorded using a microphone with an accuracy of 95%.
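To make that concrete, here's a minimal sketch of the kind of pipeline such an attack relies on, assuming the recording has already been chopped into one clip per keystroke: turn each clip into a mel-spectrogram and classify it with a small neural net. The model and numbers here are illustrative, not the researchers' actual setup.

```python
# Illustrative keystroke-audio classifier, NOT the paper's exact model.
# Assumes pre-segmented clips: one short waveform per keystroke.
import torch
import torch.nn as nn
import torchaudio

N_KEYS = 36  # e.g. a-z plus 0-9

# Turn a 16 kHz mono waveform into a mel-spectrogram "image".
to_mel = torchaudio.transforms.MelSpectrogram(sample_rate=16_000, n_mels=64)

class KeystrokeNet(nn.Module):
    def __init__(self, n_keys: int = N_KEYS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool down to one vector per clip
        )
        self.classify = nn.Linear(32, n_keys)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) -> spectrogram: (batch, 1, mels, frames)
        spec = to_mel(waveform).unsqueeze(1).log1p()
        return self.classify(self.features(spec).flatten(1))

model = KeystrokeNet()
clip = torch.randn(1, 4000)        # stand-in for a 250 ms keystroke clip
print(model(clip).argmax(dim=-1))  # predicted key index (untrained: noise)
```

Train that on a few thousand labeled keystroke recordings and you have the core of the attack.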

What makes this so bad is that everyone has a unique keyboard entry fingerprint.

We all have a unique rhythm and groove to express ourselves on a keyboard.

Learning to change these patterns is slow, because typing is in large part muscle memory.

Retraining that muscle memory is slow, intentional work, especially for veteran keyboard users.
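To see what a typing fingerprint actually looks like, here's a toy sketch of the classic keystroke-dynamics features: dwell time (how long you hold each key) and flight time (the gap between keys). The timings below are invented for illustration.

```python
# Toy keystroke-dynamics fingerprint. Event timings are made up.
from statistics import mean

# (key, press_ms, release_ms) events captured from one typist
events = [("h", 0, 80), ("e", 140, 210), ("l", 290, 355),
          ("l", 410, 470), ("o", 560, 630)]

# Dwell time: how long each key stays held down.
dwell = [release - press for _, press, release in events]

# Flight time: gap between releasing one key and pressing the next.
flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

# A crude fingerprint: your average rhythm, stable across sessions.
print(f"mean dwell:  {mean(dwell):.1f} ms")
print(f"mean flight: {mean(flight):.1f} ms")
```

Per-key versions of those two numbers are distinctive enough that keystroke dynamics has been studied for authentication for decades, which is exactly why the same signal also works against you.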

Just as there is a personal keyboard usage pattern that can be detected, all content made by humans can be fed into a detection algorithm.

The same training that makes ChatGPT so effective can be applied to deanonymizing you on the internet.

How many millions would an attacker spend on compute to decode Elon typing out his eX-Twitter password?

Humanity's greatest weapon was pattern matching. We conquered the world in only 160,000 years.

Part of that was because our species out-pattern-matched every other competitor.

We’ve built math machines that can out-pattern-match us.

💡
Podcast Recommendation:

In this three-hour marathon, Dan Carlin unfolds the intricate build-up to the First World War.

In it, he makes the observation that WWI was the moment Homo sapiens crossed a threshold: the moment our weapons became able to kill us at scale.

From there the scale went exponential.

We’ve scaled sophisticated math models to the point where we can detect the music of your keyboard.

Defensive Weaponized AI

An attack vector similar to the acoustic keyboard keylogger can be found in the world of competitive gaming.

Except it’s being used as a way to prevent cheating:

The proposition is to run an AI model that pattern-matches a player's unique playstyle, down to the individual key and mouse stroke.

Compare that player's patterns to a cheater's, and you can detect if the player is cheating.

What I find interesting about this is that it's using the same approach as the acoustic keyboard password stealer.

It’s using weaponized AI in a defensive capacity.
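Here's a hedged sketch of what that defensive pattern matching could look like: histogram a session's input timings and flag it when it stops resembling the account's baseline. The features, numbers, and threshold are all invented for illustration; a real anti-cheat model would be far richer.

```python
# Toy playstyle check: compare a session's input-timing profile
# to the account's historical baseline. All values illustrative.
import math

def profile(intervals_ms: list[float]) -> list[float]:
    """Bucket inter-action intervals into a normalized histogram."""
    edges = [50, 100, 200, 400, 800]  # bucket boundaries in ms
    buckets = [0.0] * (len(edges) + 1)
    for iv in intervals_ms:
        buckets[next((i for i, e in enumerate(edges) if iv < e), len(edges))] += 1
    total = sum(buckets) or 1.0
    return [b / total for b in buckets]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((math.hypot(*a) * math.hypot(*b)) or 1.0)

baseline = profile([120, 180, 150, 300, 90, 220, 160])  # past human sessions
session = profile([15, 14, 16, 15, 14, 16, 15])         # suspiciously robotic
if cosine(baseline, session) < 0.7:                     # illustrative threshold
    print("flag for review: playstyle deviates from this account's fingerprint")
```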

In both cases, pattern matching a digital fingerprint is a profitable endeavor.

  • For the rogues, cracking the digital fingerprint of High Value Targets could do billions in damage.
  • For the benevolent, a game studio would pay high subscription costs to reliably detect and track cheaters.

If there is an economic incentive in identifying digital fingerprints, then the investment will be made, especially since it has the bright shiny letters AI in it.

Building defensive applications of AI as a profitable business venture is likely the best way to inoculate ourselves from machines that can out-pattern-match us.

It’s more than “Can’t beat ‘em, join ‘em”.

It’s “Can’t beat ‘em, profit off ‘em”.

At least I hope that’s a viable option, because I posit that conventional digital security measures are becoming more and more obsolete as we scale AI.

No Safe Password

Consider the following passage from the Google Cloud docs on Workload Identity Federation:

[Image: excerpt from the Google Cloud Workload Identity Federation documentation]

In server architecture, a service account key is a fancy term for “long password”.

We usually don’t type these passwords out, but they certainly do get copypasta’d.

What this acoustic keyboard attack demonstrates is that no password is safe, even if you call it a service key.

It also demonstrates that Google already has a proposed solution: Tie your credentials to the person.

That IAM (Identity and Access Management) thing it's referencing is what grants server privileges to software engineers working at a company.

💡
On the technical side, workload identity federation is where you run a server that exchanges your company's existing login for a short-lived, generated Google credential. It basically connects your Okta auth to your GCP IAM privileges, tying the 'you' at work to the 'you' on GCP.
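For the curious, that exchange is roughly an OAuth token-exchange call against Google's Security Token Service. A hedged sketch, where the project number, pool, and provider names are made-up placeholders and the Okta token is elided:

```python
# Sketch of the STS token exchange behind workload identity
# federation. Pool/provider/project values below are placeholders.
import requests

okta_id_token = "eyJ..."  # OIDC token issued by your company's IdP

resp = requests.post(
    "https://sts.googleapis.com/v1/token",
    json={
        "grantType": "urn:ietf:params:oauth:grant-type:token-exchange",
        "audience": (
            "//iam.googleapis.com/projects/123456/locations/global/"
            "workloadIdentityPools/example-pool/providers/example-okta"
        ),
        "scope": "https://www.googleapis.com/auth/cloud-platform",
        "requestedTokenType": "urn:ietf:params:oauth:token-type:access_token",
        "subjectToken": okta_id_token,
        "subjectTokenType": "urn:ietf:params:oauth:token-type:jwt",
    },
    timeout=10,
)
short_lived_cred = resp.json()["access_token"]  # expires quickly by design
```

No long-lived key sits on disk waiting to be copypasta'd; the credential is minted on demand and dies on schedule.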

Who Are You?

If you lose access to your Google/Microsoft/Apple account, you're probably going to have a bad day.

Especially if you lose access because someone impersonated you.

That’s why Google and every other major platform puts so much effort into authenticating who a person is.

And most smaller companies defer to logging in with one of those major platforms:
[Image: "Sign in with..." buttons for the major platforms]

At the end of the day, Google still needs to prove that it's you logging into that account, and not an impersonator, human or bot.

My argument is that targeted AI algorithms can be instructed to impersonate you with continually improving accuracy.