AI Can Now Decode Your Keystrokes. Your Advisors Are Exposed.
John O'Connell · May 4, 2026

Executive Summary
Artificial intelligence has introduced a cybersecurity risk that many wealth management firms have not yet addressed. AI systems can now analyze the sound of a keyboard and determine what a person is typing. A nearby smartphone, a laptop microphone, or an open Zoom meeting can capture these sounds. Machine-learning models can then infer the keystrokes with reported accuracy rates as high as 95 percent.1
Your advisors type passwords, account numbers, move-money amounts, and trading instructions as part of their daily workflow. Hybrid work, remote collaboration, and advisor travel increase the likelihood that keystroke audio is captured in environments filled with active microphones.
Understanding the Risk
An acoustic side-channel attack uses the sound of keystrokes to determine what a person types. Each key produces a slightly different sound based on its position and how it is pressed. These differences are subtle, but AI models trained on thousands of samples can recognize the patterns and map them to specific characters.
A 2023 academic study demonstrated that a smartphone placed next to a laptop could record keystrokes and allow an AI model to identify characters with approximately 95 percent accuracy. Even audio captured through Zoom calls yielded accuracy above 90 percent.1 2 The attacker does not need access to the firm's systems. They only need access to the audio stream.
Your advisors routinely type sensitive information while participating in virtual client meetings, joining internal calls, working from home offices, traveling through airports or staying in hotels, and attending conferences. Modern work environments are filled with microphones: laptops, phones, tablets, headsets, and conference systems. If sensitive information is typed while those microphones are active, the sound can be captured and analyzed. For a regulated industry built on confidentiality and client trust, this is a meaningful operational exposure.
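To make the mechanism concrete, the following is a minimal, hypothetical sketch of the classification step in Python. It simulates each key's acoustic fingerprint as a simple feature vector and uses a nearest-centroid classifier in place of the deep models described in the cited research; the key set, fingerprint vectors, and noise level are all invented for illustration, and nothing here processes real sound.

```python
# Hypothetical sketch of the classification step behind an acoustic
# side-channel attack. Real attacks train deep models on spectral
# features (e.g. mel-spectrograms) extracted from recorded audio; here
# each key's "acoustic fingerprint" is simulated as a distinct feature
# vector plus microphone noise, and a nearest-centroid classifier
# stands in for the neural network.
import random

random.seed(42)
KEYS = "abcdefghij"  # toy keyboard with ten keys

# Training phase: the attacker records labeled samples of each key press.
# We idealize each key's learned fingerprint as a distinct vector.
FINGERPRINTS = {
    key: [1.0 if i == j else 0.0 for j in range(len(KEYS))]
    for i, key in enumerate(KEYS)
}

def record_keystroke(key, noise=0.15):
    """Simulate one captured keystroke: fingerprint plus microphone noise."""
    return [v + random.gauss(0.0, noise) for v in FINGERPRINTS[key]]

def classify(features):
    """Map a noisy recording to the key with the nearest fingerprint."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(KEYS, key=lambda k: sq_dist(features, FINGERPRINTS[k]))

def recover(text):
    """'Attack' phase: reconstruct typed text from simulated audio alone."""
    return "".join(classify(record_keystroke(ch)) for ch in text)

print(recover("badge"))  # letters drawn from the toy keyboard
```

The point of the sketch is that the attacker's pipeline consumes nothing but audio: once a model has been trained on labeled keystroke sounds, any microphone that can hear the keyboard becomes a data source.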
How Your Firm Can Protect Itself
- Train Your Advisors on Location Awareness and Safe Behavior
Training is the most effective first step. Your advisors must understand that typing in public or semi-public environments can expose sensitive information, because they cannot control which nearby microphones are listening.
Make sure training covers the basics: no passwords or client data entered in airports, hotels, lounges, conference centers, or rideshares. No move-money instructions or trade details entered while traveling. No account numbers typed during virtual meetings unless the advisor is in a private, controlled location.
The most practical habit to reinforce is simple: mute the microphone before typing anything sensitive.
- Use Password Managers to Eliminate Sensitive Keystrokes
Password managers are the single most effective defense against this risk. They eliminate the need to type credentials manually, which removes the keystroke audio that AI systems can analyze.
A password manager autofills login credentials without typing, works consistently across environments, reduces repetitive password entry, and strengthens overall password hygiene. If a password manager can fill the field, your advisors should not type it manually.
- Strengthen Remote-Work and Meeting Policies
Policies should clearly state that your advisors may not type sensitive information while their microphone is active during virtual meetings. Limit sensitive data entry to private environments. No one should process custodial logins, ACAT transfers, trade entries, or money movement while traveling.
Write these policies in plain English and reinforce them in training.
- Review Conferencing Vendors and Audio Retention Practices
Many conferencing platforms store meeting audio to generate transcripts or summaries. These recordings may contain keystroke sounds and should not be retained longer than necessary.
Confirm that audio is deleted once transcript accuracy is verified, that vendors do not retain raw audio indefinitely, and that audio is not used for secondary purposes or AI model training without explicit consent.
- Update Incident-Response Playbooks
Acoustic attacks leave no software footprint. If a credential compromise or account issue occurs, your incident-response team should consider whether keystroke audio may have been captured. Reviewing conferencing logs, meeting history, and recent travel activity should be part of the investigation.
What You Should Do Now
AI has created a new way for sensitive information to leak through everyday tools your advisors use without thinking. This is not a technology failure. It is an operational leadership issue.
You need to ensure your advisors understand where they can safely work, when they should avoid typing sensitive information, and why password managers are critical. Confirm that your conferencing vendors handle audio responsibly and delete it when no longer needed.
This does not require a system overhaul. It requires clear expectations and consistent reinforcement. Act now and you protect your advisors, protect your clients, and stay ahead of a risk that will only expand as AI capabilities evolve.
Endnotes
1 Harrison, Joshua, et al. "A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards." arXiv, 2023, https://arxiv.org/abs/2308.01074. Accessed 15 Feb. 2026.
2 Simonite, Tom. “AI Can Hear What You Type.” IEEE Spectrum, 2023, https://spectrum.ieee.org/keyboard-acoustic-side-channel-attack. Accessed 15 Feb. 2026.