Robocalls are rampant, with scammers using AI and other tools to disrupt day-to-day life and cheat Americans out of their money by impersonating family members, phone providers and more. On October 24, the Senate Commerce Committee’s Subcommittee on Communications, Media, and Broadband heard testimony on the technology at the center of both the latest problem and a proposed solution: AI.
Currently, bad actors are using AI to clone people’s voices and repurpose them in calls to loved ones, often feigning a state of distress. These scams go beyond seemingly real calls from banks and credit card companies, producing a disturbing and jarring experience: not knowing whether you’re speaking to someone you know.
The financial repercussions (not to mention the potential mental distress) are tremendous. Senator Ben Ray Luján, chair of the subcommittee, estimates that individuals nationwide receive 1.5 billion to 3 billion scam calls monthly, and that such calls defrauded Americans out of $39 billion in 2022. Those figures persist despite the Telephone Robocall Abuse Criminal Enforcement and Deterrence (TRACED) Act of 2019, which expanded the government’s power to prosecute illegal callers and made it easier for individuals to block them.
Much of the blame for the continued problem has fallen on government agencies like the Federal Communications Commission (FCC). “FCC enforcement actions are not sufficient to make a meaningful difference in these illegal calls. U.S.-based providers continue to spurn the Commission’s requirements to respond to traceback requests,” Margot Saunders, a senior attorney at the National Consumer Law Center, said in her testimony to the subcommittee. “The fines issued against some of the most egregious fraudsters have not been recovered, which undermines the intended deterrent effect of imposing these fines. Yet the Commission has referred only three forfeiture orders to the Department of Justice related to unwanted calls since the FCC began TRACED Act reporting in 2020.”
Saunders called on the FCC to issue clearer guidance on existing regulations and to impose harsher penalties, namely suspension, on complicit voice service providers. She also called for explicit consent requirements before individuals can be contacted.
Mike Rudolph, chief technology officer at robocall-blocking firm YouMail, pitched the idea of using AI to flag insufficient information in the FCC’s Robocall Mitigation Database. Instead of properly completing and filing the required information, some phone providers submit blank or irrelevant paperwork, avoiding accountability for their (lack of) action.
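To make the idea concrete, the screening Rudolph describes could start with even simple automated checks before any AI model is involved. The sketch below is a minimal, hypothetical rule-based filter; the field names (`provider_name`, `mitigation_plan`) and thresholds are illustrative assumptions, not the FCC database’s actual schema.

```python
# Hypothetical sketch of flagging insufficient Robocall Mitigation Database
# filings. Field names and the word-count threshold are assumptions for
# illustration only, not the FCC's actual schema or criteria.

REQUIRED_FIELDS = ("provider_name", "mitigation_plan")

def flag_filing(filing: dict) -> list[str]:
    """Return a list of reasons a filing looks insufficient (empty list = OK)."""
    issues = []
    for field in REQUIRED_FIELDS:
        value = (filing.get(field) or "").strip()
        if not value:
            issues.append(f"missing or blank field: {field}")
    plan = (filing.get("mitigation_plan") or "").strip()
    # A plan under an assumed minimal length is unlikely to describe
    # real mitigation steps, so flag it for human review.
    if plan and len(plan.split()) < 25:
        issues.append("mitigation plan too short to be substantive")
    return issues

# A blank submission is flagged; a complete one passes.
print(flag_filing({"provider_name": "Example Telecom", "mitigation_plan": ""}))
```

In practice, an AI classifier would extend checks like these to catch irrelevant boilerplate that passes length tests, which is where the approach pitched to the subcommittee goes beyond simple rules.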