Cybersecurity experts harness AI to safeguard mobile apps against emerging threats

  • Cybersecurity developers are using AI to build code to mitigate threats like spyware on mobile apps.
  • A product manager said effective AI tools should be trained on “high-quality datasets.”
  • This article is part of “Build IT,” a series about digital tech and innovation trends that are disrupting industries.

Mobile apps are ubiquitous parts of our lives. As their foothold in society strengthens, so does their susceptibility to cyberattacks.

With new app security threats emerging, cybersecurity professionals and developers are turning to artificial intelligence to improve the development, rollout, and effectiveness of security fixes.

Cybersecurity flaws in mobile apps

Jake Moore, the global cybersecurity advisor at ESET, told Business Insider that the biggest cybersecurity threats affecting mobile apps are data leaks, spyware, and phishing attacks.

He said poor data-privacy safeguards contribute to these problems and could cause leaks of sensitive information. He added that spyware campaigns such as Pegasus often target smartphone users, luring them into opening links in emails or text messages that expose them to dangerous software and viruses.

Outdated operating systems and apps downloaded from third-party stores present risks as well. "They may contain malicious applications designed to steal data or spy on the device," Moore said.

Jimmy Desai, a consultant commercial solicitor at Keystone Law specializing in data protection, said cybersecurity incidents could also happen when someone loses their device. “People often use their mobiles for both work and social purposes, and so this can cause problems from a practical and legal point of view,” he said.

How cybersecurity professionals can use AI to boost mobile-app security

While mobile-app cybersecurity threats are growing in complexity and scale, advances in AI can provide practical solutions.

Moore said advanced AI algorithms could help cybersecurity experts identify and mitigate malware, phishing attacks, and other threats before they affect the user. He argued that because a treasure trove of data powers AI, the technology will continue to learn, improve, and ultimately make mobile devices safer.

These AI tools, Moore said, can “detect patterns and anomalies” that indicate malicious activity and can outperform traditional security measures. “This is particularly crucial in the fast-evolving landscape of mobile applications, where new threats evolve or emerge constantly,” he added.

Candid Wüest, the vice president of product management at Acronis, said AI could help cybersecurity professionals understand how secure an app’s life cycle is. He told BI that coding platforms such as GitHub’s Copilot tool use AI to help software developers design robust, secure code for mobile apps.

One of Copilot’s features, Wüest said, can determine whether code a developer writes or changes would introduce new threats on mobile devices. He added that such tools could help ensure mobile-app code is secure and constantly tested for cybersecurity flaws.

Wüest said AI could also help detect anomalies in users’ in-app activities and identify fraud. “For example, if a user for a loyalty app is logging in 100 times per hour and directly jumping to the ‘submit’ page for a contest and submitting their ID to win a prize, then it’s probably a bot trying to cheat and win,” he said.
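The bot-detection scenario Wüest describes boils down to flagging accounts whose activity rate is far outside normal human behavior. A minimal sketch of that idea, using a sliding-window login counter with a hypothetical threshold (the 100-logins-per-hour figure and the cutoff below are illustrative, not from any real product):

```python
from collections import deque
import time

# Hypothetical threshold: flag users who log in more than
# MAX_LOGINS times within a sliding one-hour window.
MAX_LOGINS = 20
WINDOW_SECONDS = 3600


class LoginRateMonitor:
    """Flags accounts whose login rate suggests bot activity."""

    def __init__(self, max_logins=MAX_LOGINS, window=WINDOW_SECONDS):
        self.max_logins = max_logins
        self.window = window
        self.events = {}  # user_id -> deque of login timestamps

    def record_login(self, user_id, timestamp=None):
        """Record a login; return True if the user now looks suspicious."""
        now = timestamp if timestamp is not None else time.time()
        q = self.events.setdefault(user_id, deque())
        q.append(now)
        # Drop timestamps that have aged out of the one-hour window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_logins
```

A real fraud model would weigh many more signals (navigation paths, device fingerprints, submission patterns) and learn the thresholds from data rather than hard-coding them, but the core idea is the same: compare current behavior against a baseline and flag outliers.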

Christian Schläger, the cofounder and CEO of the app-protection service Build38, said AI is also helping mobile-app developers implement countermeasures to maximize security “while minimizing the impact on user experience.”

Wüest said his biggest recommendations for effectively using AI in mobile cybersecurity would be to collect and use “high-quality datasets for training,” continuously update AI models to adapt to new threats, and integrate AI into security tools to strengthen their defenses.

The pitfalls of using AI in mobile-app cybersecurity

Moore said that AI isn’t without flaws and that there’s a “relatively high” chance it will make mistakes. AI algorithms, he said, could generate false positives because of discrepancies in the datasets used to train them, causing developers to mistake “legitimate activities for threats.”

He added that cybersecurity professionals should train AI algorithms on “comprehensive, diverse, unbiased, and up-to-date datasets” and merge AI with existing security infrastructure in a way that “complements and enhances their current tools and processes rather than replacing them.”

Schläger said another big issue is that cybercriminals use AI to reverse-engineer mobile apps to understand how they work and “develop new attack scenarios.”

This can exacerbate privacy breaches. But Wüest told BI that developers could mitigate harm through data anonymization, more-diverse training datasets, and persistent data monitoring. He added that developers should design AI tools to learn continuously, as well as “develop ethical guidelines and verify local laws to address AI’s use in cybersecurity.”

How users can improve their apps’ security

Moore said people could improve security on their phones by securing accounts with unique passwords, setting up multifactor authentication, backing up data, and regularly updating software.

Both Moore and Schläger said that using private WiFi networks, as opposed to public hot spots, is also a good safeguard, especially when conducting sensitive business and transactions.

“Awareness and vigilance are key to protecting personal information from hackers and cybersecurity threats,” Schläger said.

Nicholas Fearn