An Overview of Application Security

Mohamed Abdul-Hameed

Mobile best practices to keep user data safe.

In today’s world, users trust hundreds of applications with their data. This puts a great responsibility on the shoulders of software engineers building and working on these applications.

In this article, I will talk about some of the important practices to improve the security of your apps and make sure your users’ data is always safe.

Most of the things I am going to cover are applicable for all kinds of applications, but I’ll be referring to iOS from time to time.

Application Security

Threat Modeling

Threat modeling is a continuous process where we think about what can go wrong in the context of security. It’s important because it allows you to consider what risks you face and what you can do to mitigate them.

Developers, designers, program managers and testers can participate in building the threat model so that the team members involved in the project share a common understanding.

The resulting threat model should be a reference for the team when making decisions.

💡 Microsoft has a tool to help build threat models; it’s worth taking a look at.

Let’s go over each of the steps of threat modeling — Define, Diagram, Identify, Mitigate, and Validate — in more detail:

Define

We start off by understanding the application we’re building by defining the assets and potential attackers.

💡 As a general rule, if a user was required to grant access, it’s an asset. This includes, for example, camera/microphone access, location data, contact information, or files.

💡 Attackers can be criminals who want to steal money from your users, or steal their data and blackmail them. They can also be competitors, or even romantic partners and family members.

Diagram

After defining the assets and potential attackers, and having a good understanding of what we’re building, we can draw a flow diagram of our application.

A flow diagram describes our application — where the data comes from, which parts of the application the data is processed in, and the data journey until it reaches the UI.

In a standard flow diagram, we’ll see the data flowing from the database and servers. The data will then be processed or parsed by certain entities in our application and can then be prepared to be displayed to the user.

Drawing this diagram helps as it visualises your application, making it easier to diagnose and talk about in a team setting.

We’ll start by drawing out the data stores in our system and the system resources we leverage.

💡 Data stores are where we store our data. These can be our servers, the local cache, or iCloud, for example.

We should also think about the other ways data might be able to make its way into our application, for example through deep links.

💡 Deep links are a special kind of link that's opened inside an app instead of in a browser. In the Apple world, they're usually called Universal Links. For more information about them, you can refer to the Apple documentation.

Identify

After drawing the diagram, we try to identify the highest risk areas in our application. These are the parts of the application that would most directly work with attacker-controlled data in the case of a security breach.

After identifying these areas we start to draw boundaries on the diagram between them and the rest of the application.

Let’s look at a couple of examples:

  • If the application we’re building supports deep links, we should draw a boundary between the deep link handler (the part of the application that is responsible for handling and parsing deep links) and the rest of the application. When processing a deep link, there’s the possibility that this deep link carries dangerous data with it, as it’s completely in the sender’s control (see the sketch after this list).
  • If the application fetches data from the server, we should draw a boundary between the networking client (the part of the application that is responsible for fetching and parsing the data from the server) and the rest of the application. When communicating with the server, there is a possibility that the data we are receiving back was altered by a Man-In-The-Middle attack (MITM) or that our servers were hacked.
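As a rough illustration, here is a minimal sketch of a deep link handler sitting at such a boundary. The scheme, host, route names, and the openProfile(id:) helper are all hypothetical; the point is simply that nothing from the URL is acted on before it has been validated.

```swift
import Foundation

// A minimal sketch of a deep link handler that treats incoming URLs as
// untrusted input. The routes and `openProfile(id:)` are hypothetical.
enum DeepLinkError: Error {
    case unknownRoute
    case invalidParameter
}

func handleDeepLink(_ url: URL) throws {
    // Only accept the scheme and host we expect.
    guard url.scheme == "https", url.host == "example.com" else {
        throw DeepLinkError.unknownRoute
    }

    let components = url.pathComponents.filter { $0 != "/" }
    switch components.first {
    case "profile":
        // Validate the parameter strictly (here: digits only) before using it.
        guard let id = components.dropFirst().first,
              !id.isEmpty,
              id.allSatisfy(\.isNumber) else {
            throw DeepLinkError.invalidParameter
        }
        openProfile(id: id)
    default:
        throw DeepLinkError.unknownRoute
    }
}

// Hypothetical navigation helper.
func openProfile(id: String) {
    print("Opening profile \(id)")
}
```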

The areas separated from the rest of the application by these boundaries are where we need to be extra careful during implementation.

These are the areas we’re also going to think about in detail in the next step.

Mitigate

After identifying the highest risk areas, we need to think about ways to make sure our users’ data is safe with us.

For example, in an application that communicates with a server, the classes that are responsible for parsing the network calls’ results are going to be identified as high risk areas in the Identify step. We need to think deeply about what might go wrong there.

In this phase, we’ll try to come up with solutions to mitigate the risks we defined in the previous step.
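For example, here is a minimal sketch of that kind of defensive parsing in Swift. The Profile model and its constraints are made up; the pattern of decoding with Codable and then validating the values before they cross the trust boundary is the mitigation we’re after.

```swift
import Foundation

// A sketch of defensive parsing at the networking boundary.
// The `Profile` model and its field constraints are hypothetical.
struct Profile: Codable {
    let id: Int
    let displayName: String
    let avatarURL: String
}

enum ParsingError: Error {
    case validationFailed
}

func parseProfile(from data: Data) throws -> Profile {
    let profile = try JSONDecoder().decode(Profile.self, from: data)

    // Reject values that are valid JSON but outside what we expect.
    guard profile.id > 0,
          (1...100).contains(profile.displayName.count),
          let url = URL(string: profile.avatarURL),
          url.scheme == "https" else {
        throw ParsingError.validationFailed
    }
    return profile
}
```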

Validate

In this step, we evaluate what we’ve done so far and check whether our solutions hold strong or not. Based on the results of this step we may have to start the same process again. For example, if we’ve found a vulnerability or a source of input that we didn’t talk about in the previous steps.

Mobile Security Basics

This section talks about some of the security features on different mobile platforms, specifically Android and iOS.

Code Signing

In order for an application to be accepted into the Play or App Stores, it needs to be code signed with a valid certificate.

Code signing assures that an application comes from a trusted developer that, in the case of the App Store, Apple knows and is familiar with. No app on the App or Play Store can come from an unknown source.

This makes users who download applications from these platforms confident that they were developed by trusted developers, unless the device is jailbroken or rooted.

💡 Code signing doesn’t mean the code is free of security bugs, it only makes sure that the application comes from a trusted source.

Sandboxing

Sandboxing is a concept we find on both iOS and Android where each application has its own set of directories and can’t access anything outside them.

During the installation of an application on iOS, the installer creates a number of container directories for the application inside a unique home directory that’s assigned to the application.

The application can’t see anything outside these container directories. The only exception to this is using some system interfaces to access things like a user’s contact list or photos, for example.
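As a small illustration, all of an app’s regular file I/O happens through paths inside its own container directory, as in the sketch below (the file name is arbitrary).

```swift
import Foundation

// A small sketch showing that an app's file I/O happens inside its own
// sandboxed container. The file name is arbitrary.
let documents = FileManager.default.urls(for: .documentDirectory,
                                         in: .userDomainMask)[0]
let noteURL = documents.appendingPathComponent("note.txt")

// Write and read back a file inside the app's container.
try? "Hello, sandbox!".write(to: noteURL, atomically: true, encoding: .utf8)
let contents = try? String(contentsOf: noteURL, encoding: .utf8)
print(contents ?? "nothing stored yet")
```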

The sandbox imposes restrictions on all the 3rd party applications on the device so that they can only access their own files, and can only reach features like the microphone, camera, or location through public APIs provided by the OS and with the user’s permission.

Knowing that iOS uses sandboxing makes you, as a developer, confident that your application’s data can’t be altered by any other application on the device.

You can read more about sandboxing in Apple’s documentation.

Jailbreaking/Rooting

Jailbreaking on iOS and Rooting on Android devices are two different processes with a similar intention: removing the limitations imposed on the devices by the manufacturers.

By jailbreaking your iOS device, you will be able to install apps from outside the App Store. This means that you, as a user, will be putting yourself at risk by trusting developers you know nothing about, with no trust chain to ensure they won’t harm you.

An application running on a jailbroken iOS device could, for example, gain access to the SMS database.

💡 When a user chooses to Jailbreak or Root their device, they're compromising the security of their data.

It’s worth mentioning here that there are frameworks out there to help you detect whether your application is running on a jailbroken device so that you can disable certain features.
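As a rough idea of what such checks look like, here is a heuristic sketch in Swift. These indicators are commonly cited but easy to bypass, so treat this as an illustration rather than a reliable defense.

```swift
import Foundation

// A heuristic sketch of jailbreak detection. These checks are commonly
// cited indicators, are easy to bypass, and should never be your only
// line of defense.
func looksJailbroken() -> Bool {
    // 1. Files that typically only exist on jailbroken devices.
    let suspiciousPaths = [
        "/Applications/Cydia.app",
        "/usr/sbin/sshd",
        "/bin/bash",
        "/etc/apt"
    ]
    if suspiciousPaths.contains(where: { FileManager.default.fileExists(atPath: $0) }) {
        return true
    }

    // 2. Being able to write outside the app's sandbox is a strong signal.
    let testPath = "/private/jailbreak_test.txt"
    do {
        try "test".write(toFile: testPath, atomically: true, encoding: .utf8)
        try FileManager.default.removeItem(atPath: testPath)
        return true
    } catch {
        return false
    }
}
```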

Mobile Best Practices

This section covers some of the practices followed on Android or iOS to improve the security of applications.

It’s always a good idea to follow best practices, in my opinion, because it minimises the possibility of problems and increases the reliability of what you’re building. Best practices have already been through a lot of experimentation and are mature.

Some of the following are iOS-only features and some are applicable to both Android and iOS.

Safe Storage

When the user sets a passcode on their device, an iOS feature called Data Protection is automatically enabled.

With this feature, the application will still be able to read and write data normally, but all the files are encrypted and decrypted behind the scenes by the OS.

There are different protection levels that you can read more about on Apple’s documentation about Data Protection.
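For example, here is a minimal sketch of opting a file into the strictest protection class when writing it; the file name and contents are made up.

```swift
import Foundation

// A minimal sketch of opting into the strictest Data Protection class when
// writing a file. With .completeFileProtection the file is only readable
// while the device is unlocked. The file name and contents are arbitrary.
let documents = FileManager.default.urls(for: .documentDirectory,
                                         in: .userDomainMask)[0]
let secretURL = documents.appendingPathComponent("secret.json")
let payload = Data(#"{"token":"not-a-real-token"}"#.utf8)

do {
    try payload.write(to: secretURL, options: .completeFileProtection)
} catch {
    print("Failed to write protected file: \(error)")
}
```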

Apple also provides the Security framework, which specializes in security-related features such as encryption, decryption, and signing.

If you are using a database and would like to add an extra layer of security, you can do that using the Security framework to encrypt the database.

If you ever need to store sensitive data in your application, like authorization tokens for example, make sure you use the Keychain.
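Here is a minimal sketch of what storing a token in the Keychain can look like using the Security framework directly; the service and account names are hypothetical, and in practice you may prefer a small wrapper around these APIs.

```swift
import Foundation
import Security

// A minimal sketch of storing an authorization token in the Keychain using
// the Security framework. The service and account names are hypothetical.
func saveToken(_ token: String) -> Bool {
    let baseQuery: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.myapp",
        kSecAttrAccount as String: "authToken"
    ]

    // Remove any existing item first, then add the new one.
    SecItemDelete(baseQuery as CFDictionary)

    var addQuery = baseQuery
    addQuery[kSecValueData as String] = Data(token.utf8)
    // Only accessible while the device is unlocked, and never migrated
    // to another device.
    addQuery[kSecAttrAccessible as String] = kSecAttrAccessibleWhenUnlockedThisDeviceOnly

    return SecItemAdd(addQuery as CFDictionary, nil) == errSecSuccess
}
```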

iOS also takes an extra step when it comes to security. It has a hardware-based key manager called Secure Enclave that’s isolated from the main processor.

The way the Secure Enclave works is that we instruct it to create a key, it stores the key, and we start performing operations with it. The key never leaves the Secure Enclave; it’s not even loaded into the application’s memory, which makes it extremely secure.

Apple gives an overview of the Secure Enclave in its documentation, along with a detailed explanation of how it works and how we can use it.
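As an illustration, here is a minimal sketch of asking the Secure Enclave to create a key through the Security framework. The application tag is hypothetical, and a real implementation would usually also attach access-control flags to the key.

```swift
import Foundation
import Security

// A minimal sketch of asking the Secure Enclave to create a private key.
// The key material never leaves the enclave; we only get a reference to it.
func makeSecureEnclaveKey() -> SecKey? {
    let attributes: [String: Any] = [
        kSecAttrKeyType as String: kSecAttrKeyTypeECSECPrimeRandom,
        kSecAttrKeySizeInBits as String: 256,
        // This is what routes key creation to the Secure Enclave.
        kSecAttrTokenID as String: kSecAttrTokenIDSecureEnclave,
        kSecPrivateKeyAttrs as String: [
            kSecAttrIsPermanent as String: true,
            kSecAttrApplicationTag as String: Data("com.example.myapp.key".utf8)
        ]
    ]

    var error: Unmanaged<CFError>?
    guard let privateKey = SecKeyCreateRandomKey(attributes as CFDictionary, &error) else {
        print("Key creation failed: \(String(describing: error?.takeRetainedValue()))")
        return nil
    }
    return privateKey
}
```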

App Transport Security (ATS)

In iOS 9, Apple introduced a security feature called App Transport Security (ATS). Its goal is to prevent insecure network connections, or at least to make you take deliberate decisions about when to allow them by adding conscious exceptions only where needed, as explained in Apple’s documentation on the topic.

When ATS is enabled, the iOS client will not only perform the default trust evaluation checks when communicating with a server, but also some extended security checks. This makes the communication between the client and the server more secure.

You can take a look at how ATS prevents insecure connections in Apple’s documentation.

💡 ATS is only enforced by the OS when you use the standard URL Loading System; it won’t be enforced if you use lower-level APIs such as the CFNetwork framework to make network calls.

SSL Pinning

Even though we can use ATS to enforce the use of HTTPS and make sure our connections are secure, there’s still a risk of third parties intercepting the data while it’s being exchanged with the server.

When trying to communicate with the server, we do a handshake. In this handshake the server sends a certificate. Attackers can set up Man-In-The-Middle attacks using their own certificates, which may be signed by certificate authorities the device trusts, or even self-signed certificates. This way, the application’s connections can be intercepted and redirected to the attacker’s servers instead of yours.

The idea of SSL pinning is that we know which server we’re going to communicate with, so we pin that server’s certificate or public key in the client: it’s compiled into the app, and connections that present anything else are rejected.
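One common way to implement this on iOS is through a URLSession delegate. The sketch below pins the server’s leaf certificate, assumes a hypothetical server.der file bundled with the app, and leaves out concerns like certificate rotation.

```swift
import Foundation

// A minimal sketch of certificate pinning with URLSession. The bundled
// certificate file name ("server.der") is hypothetical, and a production
// implementation would also run the default trust evaluation and handle
// certificate rotation.
final class PinningDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let trust = challenge.protectionSpace.serverTrust else {
            completionHandler(.performDefaultHandling, nil)
            return
        }

        guard let serverCert = SecTrustGetCertificateAtIndex(trust, 0),
              let pinnedURL = Bundle.main.url(forResource: "server", withExtension: "der"),
              let pinnedData = try? Data(contentsOf: pinnedURL) else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }

        // Compare the server's leaf certificate with the one we shipped.
        let serverData = SecCertificateCopyData(serverCert) as Data
        if serverData == pinnedData {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}

// Usage: let session = URLSession(configuration: .default,
//                                 delegate: PinningDelegate(),
//                                 delegateQueue: nil)
```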

There are different ways to implement SSL pinning and each way has its pros and cons. You can find more information about that in this article.

For more information about SSL Pinning, I recommend watching this video, which provides a detailed explanation of how certificate validation works, the problems with it, and how SSL pinning comes to the rescue.

Code Obfuscation

We can’t prevent reverse engineering completely, but we can make it harder.

To do that, a practice of making the code difficult to understand is followed — this is called code obfuscation.

It involves obscuring parts or all of the code, renaming variables, methods, protocols, structs, and classes to meaningless names, and possibly adding in unused or meaningless code. This makes it extremely hard to understand anything from the code.

It is very important to do the code obfuscation step ourselves if we’re using Objective-C. The reason is that the Objective-C runtime needs access to all the symbols so that it can reference them by their string names.

That’s not the case for Swift. The Swift compiler strips out symbols and does a lot of optimization, which results in code that is much harder to understand. However, you can still take things further and obfuscate your Swift code if you want to.

Anti-patterns

Here I’ll identify and explain some of the common bad practices that a lot of us have fallen into.

It’s important to be aware of these patterns, avoid them, and keep an eye open for them, especially during code reviews.

Path Traversal

This is one of the most well-known anti-patterns, and also one that a lot of us have implemented in the past.

Imagine a scenario where you receive a file from the server along with another parameter to use as a file name. But, what if you received the following: ../../Library/Foo?

In this case, the sender, which may be a Man-In-The-Middle attacker, gets to decide where the file you received is stored. The sender was able to traverse up the folder hierarchy and choose where to store the file.

This way, an attacker may be able to overwrite sensitive data — your application’s database, for example!

As a general rule, never use remotely controllable parameters in file paths. If you have to, use only the last path component and reject the traversal components "." and "..".

💡 Generally, try to avoid this pattern if you can and always try to use a locally generated random file name, a UUID for example, instead.
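Here is a minimal sketch of both pieces of advice in Swift; the helper and error names are hypothetical.

```swift
import Foundation

// A minimal sketch of validating a remotely supplied file name before using
// it in a path. Only the last path component is kept, and anything that
// could traverse the folder hierarchy is rejected.
enum FileNameError: Error {
    case invalidName
}

func sanitizedFileName(from remoteName: String) throws -> String {
    let name = (remoteName as NSString).lastPathComponent

    guard !name.isEmpty,
          name != ".",
          name != "..",
          !name.contains("/") else {
        throw FileNameError.invalidName
    }
    return name
}

// Safer still: ignore the remote name entirely and generate one locally.
let localName = UUID().uuidString
```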

Uncontrolled Format Strings

Most programming languages provide a way to format strings. In C for example, you can use:

printf("Your age is %d.", age); // Your age is 44.

Imagine a scenario where you are using a format string that you receive from the server, which, again, is untrusted data that might be sent by an attacker (a Man-In-The-Middle, for example).

The attacker may send:

Your age is %d and I’m leaking some memory: %lx%lx%lx%lx%lx

This will result in something like:

Your age is 44 and I’m leaking some memory: 7ffeef78bb207ffeef78bb30 7ffeef78bb207ffeef78bb3bbe000

As you can see, that exposed some of the application’s memory!

💡 Never use or generate a format string specifier from remote data.
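For an iOS flavour of the same problem, here is a small sketch using String(format:); the remoteMessage value stands in for untrusted data received from a server.

```swift
import Foundation

// What the same mistake looks like in Swift. `remoteMessage` stands in for
// untrusted data received from a server.
let age = 44
let remoteMessage = "Your age is %d and I'm leaking some memory: %lx%lx%lx%lx%lx"

// Dangerous: the remote data is used as the format string itself, so any
// specifiers the attacker includes will be interpreted.
let unsafe = String(format: remoteMessage, age)

// Safe: the format string is a local constant and the remote data is only
// ever passed as an argument.
let safe = String(format: "Server says: %@ (age %d)", remoteMessage, age)
```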

It’s important to think about security when building your iOS application

Building a threat model for your application in the early stages can help you build more secure applications, as it gives you the chance to build security into the design as early as possible.

Knowing the possibilities on the platform you’re writing applications for is crucial to building secure apps.

Following best practices and becoming aware of anti-patterns to avoid is very important and should be taken into account during code reviews and technical decisions.

If you’re about to start building a new application, make sure you build a Threat Model first, follow the best practices and avoid the anti-patterns we talked about.

If you have existing applications, it’s worth building a Threat Model for them too and making sure your users’ data is safe.

Additional Resources

For more information about the different topics highlighted in the article, I highly recommend watching this WWDC 2020 talk.

It explains a lot of concepts related to application security in a very clear way and it builds a complete Threat Model. It also talks about more anti-patterns and best practices that I’m sure will help you write better applications.

I also like Microsoft’s Security Development Lifecycle (SDL). It explains many concepts and describes some of the procedures Microsoft has been following for years now to ensure the applications they build are secure.