What We Can Learn About Mobile Security from an Automated Android Application Testing Tool

A Deep Dive into ThirdEye and What Researchers Found

In this post, I present my thoughts on and takeaways from a research paper about ThirdEye, an automated Android application testing tool created by the paper’s authors. While the tool does not appear to have been released publicly as of this writing, the authors came across some interesting findings while running it that are worth sharing.

First, it’s worth discussing the threat model the authors used, since it informs how the various issues they discovered are evaluated. The three main threat types they identify are the on-path network attacker, the co-located app attacker, and the device-owner attacker.

Threat Type #1: On-Path Network Attacker

The on-path network attacker is the classic “evil Wi-Fi” attack: the attacker controls some piece of networking hardware between the user and the internet and uses that position to capture all of the traffic the user sends. This tactic is most effective at reading unencrypted HTTP communication, though in some cases encryption can be broken or stripped.

The authors of the paper use some aggressive tactics to proxy and decrypt this data, exceeding the real-world capabilities of an on-path attacker. Such instrumentation is a common approach in mobile application analysis, and we should keep it in mind when evaluating the severity of the findings.

Threat Type #2: The Co-Located App Attacker

The co-located app attacker is much less prevalent in the current state of the art. It hypothesizes a malicious application running on the same device as the target, exploiting whatever inter-process communication (IPC) capabilities the application and operating system expose to spy on and interfere with the target application. This kind of attacker used to have far more capability, but successive revisions of the Android OS have greatly reduced the attack surface available from this perspective. It is something I always evaluate, but the possibilities are much more limited than they once were.

The two main areas of weakness are the application’s intentionally exposed IPC interfaces – exported activities, services, broadcast receivers, content providers, and the intents that drive them – and any data stored in non-protected areas of the device file system. Due to their focus on sensitive data exposure, the authors don’t interact much with the normal IPC interfaces.
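To make the file-system half of this concrete, here is a minimal Python sketch of the kind of check a tester (or a co-located attacker) might run: walking a directory tree and flagging files readable by any other user. The directory path and the reliance on plain POSIX permission bits are simplifying assumptions; a real assessment would also account for SELinux contexts and Android’s scoped storage.

```python
import os
import stat

def find_world_readable(root):
    """Walk a directory tree and flag files readable by any user --
    the kind of exposure a co-located malicious app could exploit."""
    exposed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & stat.S_IROTH:  # "other" read bit is set
                exposed.append(path)
    return exposed
```

On a rooted test device, pointing this at an app’s data directory (or at external storage) quickly surfaces files that were never meant to leave the sandbox.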

Threat Type #3: The Device-Owner Attacker

The device-owner attacker threat type considers an attacker analyzing a freely available application binary and its behavior on a device they control. This scenario reasonably also covers the stolen-device case – anything an attacker can learn by analyzing their own device must be considered.

Concerns specific to this scenario revolve around anything the application keeps secret, whether embedded in the binary itself or established by connecting to backend servers. These kinds of attacks can also supplement the network and co-located app cases; for example, a hard-coded encryption key retrieved from the binary can be used to decrypt other users’ data.

Conducting Android Application Testing Within the Identified Threat Types

Within these three threat cases, the authors analyzed several thousand Android applications for deficiencies, focusing especially on failures to protect secret information. Information about the device, its location, and its user was all tracked, along with correlating information that helps automatically identify repeat appearances of the same user. As a user of mobile devices in my personal life, I always find myself a little perturbed by the level of information-gathering employed in the name of advertising analytics; this paper was the first time I’d heard of https://wigle.net, an openly available database of geolocated Wi-Fi SSIDs. The existence of that site means that any application collecting the Wi-Fi SSIDs near the device is potentially collecting enough information to geolocate the user without direct GPS access!
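To illustrate the SSID-geolocation idea, here is a toy Python sketch. The lookup table is entirely made up and stands in for a wardriving database like wigle.net (the real one is keyed by BSSID and vastly larger); the centroid math is a deliberately crude estimate.

```python
# Hypothetical, hand-made lookup table standing in for a wardriving
# database like wigle.net; real entries would be keyed by BSSID and
# number in the hundreds of millions.
SSID_DB = {
    "CoffeeShop_Guest": (47.6062, -122.3321),
    "Library_Public":   (47.6080, -122.3355),
    "Office_5G":        (40.7128, -74.0060),
}

def estimate_location(observed_ssids):
    """Average the known coordinates of every observed SSID that
    appears in the database -- a crude centroid estimate."""
    hits = [SSID_DB[s] for s in observed_ssids if s in SSID_DB]
    if not hits:
        return None
    lat = sum(p[0] for p in hits) / len(hits)
    lon = sum(p[1] for p in hits) / len(hits)
    return (lat, lon)
```

Two or three nearby networks are often enough to place a user within a city block, which is exactly why an app harvesting SSID scans deserves the same scrutiny as one requesting location permissions.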

I was not at all surprised to hear that faulty encryption practices are extremely prevalent in the mobile space; one of the first things I check for in any mobile assessment is whether the application is attempting to encrypt anything, and it’s very unusual to see sensitive data being encrypted correctly. Indeed, the authors found the following:

  • 2,887 of the 12,598 (22.92%) applications they assessed were performing some kind of custom encryption.
  • 2,421 of those 2,887 (83.86%) applications were using some kind of fixed or predictable key.
  • Finally, 262 of the 2,887 (9.08%) applications were using a deprecated algorithm like DES.
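To show why a fixed key is so damaging, here is a small Python sketch. Since the standard library ships no DES or AES implementation, a repeating-key XOR stands in for the cipher; the recovery logic is the same regardless of algorithm – extract the hard-coded key from the binary, and every user’s data decrypts.

```python
from itertools import cycle

# Stand-in for a key hard-coded in the APK. XOR substitutes for
# DES/AES here only because Python's standard library has no block
# cipher; the key-recovery story is identical either way.
HARDCODED_KEY = b"app-secret-key-1"

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'encryption': XOR with a repeating key."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# The app "encrypts" a user's token with the fixed key...
ciphertext = xor_cipher(b"session-token-for-alice", HARDCODED_KEY)

# ...but anyone who pulls the key out of the binary decrypts it.
recovered = xor_cipher(ciphertext, HARDCODED_KEY)
```

The fix is equally algorithm-independent: derive or fetch keys per user or per device (e.g., from the Android Keystore) rather than shipping one key to everyone.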

Educating developers on good encryption practices is always advisable, and so is having any encryption functionality evaluated by a third party. There are many, many ways to fail at encryption and leave your secrets recoverable, which I discuss in another post.

The authors also go into some detail about their automated analysis process, and it’s fascinating stuff. There are a lot of ideas here that I’ll be using in assessments going forward, especially the way they propose automating the generation of Frida hooks. Constructing Frida hooks is always a pain point when exploring application functionality, and some of their ideas around capturing common API endpoints related to encryption and file-system access seem like they’d be very productive in the field.
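As a rough illustration of what hook generation might look like – this is my own sketch, not the authors’ implementation – the following Python renders a Frida JavaScript hook for each class/method pair in a target list. The javax.crypto targets are common Android encryption APIs; the template is a simplification of real hook-writing.

```python
# Template for a Frida Java hook: intercept every overload of a
# method, log the call, then pass through to the original.
HOOK_TEMPLATE = """\
Java.perform(function () {{
    var cls = Java.use('{klass}');
    cls.{method}.overloads.forEach(function (overload) {{
        overload.implementation = function () {{
            console.log('[hook] {klass}.{method}(' +
                Array.prototype.slice.call(arguments).join(', ') + ')');
            return overload.apply(this, arguments);
        }};
    }});
}});
"""

def generate_hooks(targets):
    """Render one Frida hook script per (class, method) pair."""
    return "\n".join(HOOK_TEMPLATE.format(klass=k, method=m)
                     for k, m in targets)

# Common Android crypto APIs worth watching in an assessment.
crypto_targets = [
    ("javax.crypto.Cipher", "doFinal"),
    ("javax.crypto.spec.SecretKeySpec", "$init"),
]
script = generate_hooks(crypto_targets)
```

Feeding the generated script to `frida -U -l hooks.js <package>` (or the equivalent Python bindings) would then surface every encryption call the app makes at runtime.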

Takeaways From the Android Application Testing Analysis

Overall, I would say that mobile application owners should take stock of what analytics and other sensitive information they collect, and how that data is handled. Secure storage and transport are achievable goals in this space, but they are missed far too often.

From a red team perspective, I would say this paper proposes some very interesting ways to automate more thorough testing and inspection of mobile applications, especially regarding the handling of sensitive data.