
Exploring Google Wallet using the secure element interface

In the first post of this series we showed how to use the embedded secure element interface Android 4.x offers. Next, we used some GlobalPlatform commands to find out more about the SE execution environment in the Galaxy Nexus. We also showed that there is currently no way for third parties to install applets on the SE. Since installing our own applets is not an option, we will now find some pre-installed applets to explore. Currently the only generally available Android application that is known to install applets on the SE is Google's own Google Wallet. In this last post, we'll say a few words about how it works and then try to find out what publicly available information its applets host.

Google Wallet and the SE

To quote the Google Play description, 'Google Wallet holds your credit and debit cards, offers, and rewards cards'. How does it do this in practice though? The short answer: it's slightly complicated. The longer answer: only Google knows all the details, but we can observe a few things. After you install the Google Wallet app on your phone and select an account to use with it, it will contact the online Google Wallet service (previously known as Google Checkout), create or verify your account and then provision your phone. The provisioning process will, among other things, use First Data's Trusted Service Manager (TSM) infrastructure to download, install and personalize a bunch of applets on your phone. This is all done via the Card Manager, and the payload of the commands is, of course, encrypted. However, the GP secure channel only encrypts the data part of each APDU, so the command headers remain visible, and it is fairly easy to map the install sequence on a device modified to log all SE communication. There are three types of applets installed: a Wallet controller applet, a MIFARE manager applet and, of course, payment applets that enable your phone to interact with NFC-enabled PayPass terminals.

The controller applet securely stores Google Wallet state and event log data, but most importantly, it enables or disables contactless payment functionality when you unlock the Wallet app by entering your PIN. The latest version seems to have the ability to store and verify a PIN securely (inside the SE); however, it does not appear to be actually used by the app yet, since the Wallet Cracker can still recover the PIN on a rooted phone. This implies that the PIN hash is still stored in the app's local database.

The MIFARE manager applet works in conjunction with the offers and reward/loyalty card features of Wallet. When you save an offer or add a loyalty card, the MIFARE manager applet will write block(s) to the emulated MIFARE Classic 4K card to mirror the offer or card on the SE, letting you redeem it by tapping your phone at an NFC-enabled POS terminal. It also keeps an application directory (similar to the standard MIFARE MAD) in the last sectors, which is updated each time you add or remove a card. The emulated MIFARE card uses custom sector protection keys, which are most probably initialized during the initial provisioning process, so you cannot currently read the contents of the MIFARE card with an external reader. However, the encryption and authentication scheme used by MIFARE Classic has been broken and proven insecure, and the keys can be recovered easily with readily available tools. It would be interesting to see if the emulated card is susceptible to the same attacks.

Finally, there should be one or more EMV-compatible payment applets that enable you to pay with your phone at compatible POS terminals. EMV is an interoperability standard for payments using chip cards, and while each credit card company has its own proprietary extensions, the common specifications are publicly available. The EMV standard specifies how to find out what payment applications are installed on a contactless card, and we will use that knowledge later to explore Google Wallet further.

Armed with that basic information, we can now extend our program to check whether Google Wallet applets are installed. Google Wallet has been around for a while, so by now the controller and MIFARE manager applets' AIDs are widely known. In fact, we don't need to look further than the latest AOSP code, since the system NFC service has them hardcoded. This clearly shows that while SE access code is gradually being made more open, its main purpose for now is to support Google Wallet. The controller AID is A0000004762010 and the MIFARE manager AID is A0000004763030. As you can see, they start with the same prefix (A000000476), which we can assume is the Google RID (there doesn't appear to be a public RID registry). The next step is, of course, to try selecting them. The MIFARE manager applet responds with a boring 0x9000 status, which only shows that it's indeed there, but selecting the controller applet returns something more interesting:

6f 0f -- File Control Information (FCI) Template
84 07 -- Dedicated File (DF) Name
a0 00 00 04 76 20 10 (BINARY)
a5 04 -- File Control Information (FCI) Proprietary Template
80 02 -- Response Message Template Format 1
01 02 (BINARY)

'File Control Information' and 'Dedicated File' are legacy terms from file system-based cards, but the DF name (a DF is equivalent to a directory) is simply the AID of the controller applet (which we already know), and the last piece of data is something new. Two bytes look very much like a short value, and if we convert them to decimal we get 258, which happens to be the controller applet version displayed on the 'About' screen of the current Wallet app ('v258').
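For reference, here's roughly how such a SELECT can be issued over the wired SE interface. Since NfcExecutionEnvironment (in com.android.nfc_extras) is a hidden API, we go through reflection as in the previous posts; the helper method below is ours (Method is java.lang.reflect.Method):

// selects an applet by AID via the (already opened) execution environment
byte[] selectByAid(Object nfcEe, byte[] aid) throws Exception {
    byte[] cmd = new byte[5 + aid.length];
    cmd[0] = 0x00;              // CLA
    cmd[1] = (byte) 0xA4;       // INS: SELECT
    cmd[2] = 0x04;              // P1: select by DF name (AID)
    cmd[3] = 0x00;              // P2: first or only occurrence
    cmd[4] = (byte) aid.length; // Lc
    System.arraycopy(aid, 0, cmd, 5, aid.length);

    Method transceive = nfcEe.getClass().getMethod("transceive", byte[].class);
    // the response contains the FCI (if any) followed by the status word
    return (byte[]) transceive.invoke(nfcEe, cmd);
}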


Now that we have an app that can check for Wallet applets (see sample code, screenshot above), we can verify whether they are indeed managed by the Wallet app. It has a 'Reset Wallet' action on the Settings screen, which claims to delete 'payment information, card data and transaction history', but how does it affect the installed applets? Trying to select them after resetting Wallet shows that the controller applet has been removed, while the MIFARE manager applet is still selectable. We can assume that any payment applets have also been removed, but we still have no way to check. This leads us to the topic of our next section:

Exploring Google Wallet EMV applets

Google Wallet is compatible with PayPass terminals, and as such should follow relevant specifications. For contactless cards those are defined in the EMV Contactless Specifications for Payment Systems series of 'books'. Book A defines the overall architecture, Book B -- how to find and select a payment application, Book C -- the rules of the actual transaction processing for each 'kernel' (card company-specific processing rules), and Book D -- the underlying contactless communication protocol. We want to find out what payment applets are installed by Google Wallet, so we are most interested in Book B and the relevant parts of Book C.

Credit cards can host multiple payment applications, for example for domestic and international payments. Naturally, not all POS terminals know of or are compatible with all applications, so cards keep a public EMV application registry at a well-known location. This registry is optional for contact cards, but mandatory for contactless cards. The registry application is called the 'Proximity Payment System Environment' (PPSE), and selecting it will be our first step. The application's AID is derived from its name, '2PAY.SYS.DDF01', which translates to '325041592E5359532E4444463031' in hex. Upon successful selection it returns a TLV data structure that contains the AIDs, labels and priority indicators of the available applications (see Book B, 3.3.1 'PPSE Data for Application Selection').

To process it, we will use and slightly extend the Java EMV Reader library, which does similar processing for contact cards. The library uses the standard Java Smart Card I/O API to communicate with cards, but as we pointed out in the first article, this API is not available on Android. Card communication interfaces are nicely abstracted though, so we only need to implement them using Android's native NfcExecutionEnvironment. The main classes we need are SETerminal, which creates a connection to the card, SEConnection, which handles the actual APDU exchange, and SECardResponse, which parses the card response into status word and data bytes. As an added bonus, this takes care of encapsulating our ugly-ish reflected code. We also create a PPSE class to parse the PPSE selection response into its components. With all those in place, all we need to do is follow the EMV specification. Selecting the PPSE with the following command works on the first try, but produces a response with 0 applications:

--> 00A404000E325041592E5359532E4444463031
<-- 6F10840E325041592E5359532E4444463031 9000
response hex :
6f 10 84 0e 32 50 41 59 2e 53 59 53 2e 44 44 46
30 31
response SW1SW2 : 90 00 (Success)
response ascii : o...2PAY.SYS.DDF01
response parsed :
6f 10 -- File Control Information (FCI) Template
84 0e -- Dedicated File (DF) Name
32 50 41 59 2e 53 59 53 2e 44 44 46 30 31 (BINARY)

We have initialized the $10 prepaid card available when first installing Wallet, so something must be there. We know that the controller applet manages payment state, so after starting up and unlocking Wallet, we finally get more interesting results (shown parsed and with some bits masked below). It turns out that locking Wallet effectively hides payment applications by deleting them from the PPSE. This, in addition to the fact that card emulation is available only when the phone's screen is on, provides better card security than physical contactless cards, some of which can easily be read simply by using an NFC-equipped mobile phone, as has been demonstrated.

Applications (2 found):
Application
AID: a0 00 00 00 04 10 10 AA XX XX XX XX XX XX XX XX
RID: a0 00 00 00 04 (Mastercard International [US])
PIX: 10 10 AA XX XX XX XX XX XX XX XX
Application Priority Indicator
Application may be selected without confirmation of cardholder
Selection Priority: 1 (1 is highest)
Application
AID: a0 00 00 00 04 10 10
RID: a0 00 00 00 04 (Mastercard International [US])
PIX: 10 10
Application Priority Indicator
Application may be selected without confirmation of cardholder
Selection Priority: 2 (1 is highest)
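Under the hood, extracting those entries comes down to walking the TLV structure of the PPSE response (FCI template 6F, proprietary template A5, issuer discretionary data BF0C, one application template 61 per app, with the AID in tag 4F). Here's a minimal sketch of such a walk; it assumes single-byte (short form) lengths, which holds for typical PPSE responses, and the method name is ours:

// recursively collect all AIDs (tag 4F) found in a TLV structure
static void collectAids(byte[] data, int off, int end, List<byte[]> aids) {
    while (off < end) {
        int first = data[off++] & 0xff;
        boolean constructed = (first & 0x20) != 0;
        int tag = first;
        if ((first & 0x1f) == 0x1f) {
            // two-byte tag, e.g. BF0C
            tag = (tag << 8) | (data[off++] & 0xff);
        }
        int len = data[off++] & 0xff; // short form lengths only
        if (tag == 0x4f) {
            byte[] aid = new byte[len];
            System.arraycopy(data, off, aid, 0, len);
            aids.add(aid);
        } else if (constructed) {
            // descend into templates such as 6F, A5, BF0C and 61
            collectAids(data, off, off + len, aids);
        }
        off += len;
    }
}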

One of the applications is the well known MasterCard credit or debit application, and there is another MasterCard app with a longer AID and higher priority (1, the highest). The recently announced update to Google Wallet allows you to link practically any card to your Wallet account, but transactions are processed by a single 'virtual' MasterCard and then billed back to your actual credit card(s). It is our guess that the first application in the list above represents this virtual card. The next step in the EMV transaction flow is selecting the preferred payment app, but here we hit a snag: selecting each of the apps always fails with the 0x6999 ('Applet selection failed') status. It has been reported that this was possible in previous versions of Google Wallet, but has been blocked to prevent relay attacks and stop Android apps from extracting credit card information from the SE. This leaves us with using the NFC interface if we want to find out more.

Most open-source tools for card analysis, such as cardpeek and Java EMV Reader, were initially developed for contact cards, and therefore need a connection to a PC/SC-compliant reader to operate. If you have a dual-interface reader that provides PC/SC drivers you get this for free, but for a standalone NFC reader we need libnfc, ifdnfc and pcsc-lite to complete the PC/SC stack on Linux. Getting those to play nicely together can be a bit tricky, but once it's done, card tools work seamlessly.

Fortunately, selection via the NFC interface is successful and we can proceed with the next steps in the EMV flow: initiating processing with the GET PROCESSING OPTIONS command and reading the relevant application data using READ RECORD commands. For compatibility reasons, EMV payment applications contain data equivalent to that found on the magnetic stripe of physical cards. This includes the account number (PAN), expiry date, service code and card holder name. EMV-compatible POS terminals are required to support transactions based on this data alone ('mag-stripe mode'), so some of it could be available on Google Wallet as well. Executing the needed READ RECORD commands shows that it is indeed found on the SE, and both MasterCard applications are linked to the same mag-stripe data. The data is, as usual, in TLV format, and the relevant tags and formats are defined in EMV Book C-2. When parsed, it looks like this for the Google prepaid card (slightly masked):

Track 2 Equivalent Data:
Primary Account Number (PAN) - 5430320XXXXXXXX0
Major Industry Identifier = 5 (Banking and financial)
Issuer Identifier Number: 543032 (Mastercard, UNITED STATES OF AMERICA)
Account Number: XXXXXXXX
Check Digit: 0 (Valid)
Expiration Date: Sun Apr 30 00:00:00 GMT+09:00 2017
Service Code - 101:
1 : Interchange Rule - International interchange OK
0 : Authorisation Processing - Normal
1 : Range of Services - No restrictions
Discretionary Data: 0060000000000

As you can see, it does not include the card holder name, but all the other information is available, as per the EMV standard. We even get the 'transaction in progress' animation on screen while our reader is communicating with Google Wallet. We can also read the PIN try counter (set to 0, in this case meaning disabled) and a transaction log in the format shown below. We can't verify whether the transaction log is actually used though, since Google Wallet, like a lot of the newer Google services, happens to be limited to the US.

Transaction Log:
Log Format:
Cryptogram Information Data (1 byte)
Amount, Authorised (Numeric) (6 bytes)
Transaction Currency Code (2 bytes)
Transaction Date (3 bytes)
Application Transaction Counter (ATC) (2 bytes)
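For reference, the command sequence we used looks roughly like this with the Smart Card I/O API. A real implementation parses the AFL returned by GET PROCESSING OPTIONS to learn which records to read; the SFI and record number below are just examples:

// GET PROCESSING OPTIONS with an empty PDOL (command template 83, length 0)
ResponseAPDU gpo = channel.transmit(new CommandAPDU(
        0x80, 0xA8, 0x00, 0x00, new byte[] { (byte) 0x83, 0x00 }, 256));
// READ RECORD: record 1 of the file with SFI 1 (P2 = (SFI << 3) | 4)
ResponseAPDU record = channel.transmit(new CommandAPDU(
        0x00, 0xB2, 0x01, (1 << 3) | 4, 256));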

This was fun, but it doesn't really show much besides the fact that Google Wallet's virtual card(s) comply with the EMV specifications. What is more interesting is that the controller applet APDU commands that toggle contactless payment and modify the PPSE don't require additional application authentication and can be issued by any app that is whitelisted to use the secure element. The controller applet most probably doesn't store any really sensitive information, but as long as it allows its state to be modified by third party applications, we are unlikely to see any app besides Google Wallet whitelisted on production devices -- unless, of course, more fine-grained SE access control is implemented in Android.

Fine-grained SE access control

The fact that Google Wallet state can be modified by third party apps (granted access to the SE, of course) leads us to another major complication with SE access on mobile devices. While the data on the SE is securely stored and access to it is controlled by the applets that host it, once an app is allowed access, it can easily perform a denial of service attack against the SE or specific SE applications. Attacks can range from locking the whole SE by repeatedly executing failed authentication attempts until the Card Manager is blocked (a GP-compliant card usually goes into the TERMINATED state after 10 unsuccessful tries), to application-specific attacks such as blocking a cardholder verification PIN or otherwise changing a third party applet's state. Another attack, more sophisticated but harder to achieve and possible only on connected devices, is a relay attack: the phone's Internet connection is used to receive and execute commands sent by a remote phone, enabling the remote device to emulate the SE of the target device without physical proximity.

The way to mitigate those attacks is to exercise finer control over what apps that access the SE can do, by mandating that they can only select specific applets or only send a pre-approved list of APDUs. This is supported by the JSR-177 Security and Trust Services API, which only allows a connection to one specific applet, only grants access to applications with a trusted signature, and provides the ability to restrict APDUs by matching them against an APDU mask (currently implemented in the BlackBerry 7 API). SEEK for Android goes one step further than BlackBerry by supporting fine-grained access control with the access policy stored on the SE. The actual format of ACL rules and the protocols for managing them are defined in the GlobalPlatform Secure Element Access Control standard, which is relatively new (v1.0 was released in May 2012). As we have seen, the current (4.0 and 4.1) stock Android versions do restrict access to the SE to trusted applications by whitelisting their certificates (a hash of those would probably have sufficed) in /etc/nfcee_access.xml, but once an app is granted access it can select any applet and send any APDU to the SE. If third party apps that use the SE are to be allowed on Android, more fine-grained control needs to be implemented, at the very least by limiting the applets that SE-whitelisted Android apps can select.
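For reference, the whitelist format itself is quite simple; it looks roughly like the sketch below (certificate bytes elided, and the exact attribute names are best checked against the system NFC service source):

<?xml version="1.0" encoding="utf-8"?>
<resources xmlns:android="http://schemas.android.com/apk/res/android">
    <!-- apps signed with this certificate may access the embedded SE -->
    <signer android:signature="308204a8..." />
</resources>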

Because for most applications the SE is used in conjunction with NFC, an SE app needs to be notified of relevant NFC events, such as RF field detection or applet selection via the NFC interface. Disclosure of such events to malicious applications can also potentially lead to denial of service attacks, which is why access to them needs to be controlled as well. The GP SE access control specification allows rules for controlling access to NFC events to be managed along with applet access rules by saving them on the SE. In Android, global events are implemented using broadcasts, and interested applications can create and register a broadcast receiver component to receive them. Broadcast access can be controlled with standard Android signature-based permissions, but that has the disadvantage that only apps signed with the system certificate would be able to receive NFC events, effectively limiting SE apps to those created by the device manufacturer or MNO. Android 4.x therefore uses the same mechanism employed to control SE access -- whitelisting application certificates. Any application registered in nfcee_access.xml can receive the broadcasts listed below. As you can see, besides RF field detection and applet selection, Android offers notifications for higher-level events such as EMV card removal or MIFARE sector access. By adding a broadcast receiver to our test application as shown below, we were able to receive AID_SELECTED and RF field-related broadcasts. AID_SELECTED carries an extra with the AID of the selected applet, which allows us to start a related activity when an applet we support is selected. APDU_RECEIVED is also interesting because it carries an extra with the received APDU, but it doesn't seem to be sent, at least not in our tests.

<receiver android:name="org.myapp.nfc.SEReceiver" >
<intent-filter>
<action android:name="com.android.nfc_extras.action.AID_SELECTED" />
<action android:name="com.android.nfc_extras.action.APDU_RECEIVED" />
<action android:name="com.android.nfc_extras.action.MIFARE_ACCESS_DETECTED" />
<action android:name="android.intent.action.MASTER_CLEAR_NOTIFICATION" />
<action android:name="com.android.nfc_extras.action.RF_FIELD_ON_DETECTED" />
<action android:name="com.android.nfc_extras.action.RF_FIELD_OFF_DETECTED" />
<action android:name="com.android.nfc_extras.action.EMV_CARD_REMOVAL" />
<action android:name="com.android.nfc.action.INTERNAL_TARGET_DESELECTED" />
</intent-filter>
</receiver>
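The receiver itself is straightforward; a sketch is shown below. The name of the AID extra is taken from the nfc_extras source, so treat it as an assumption:

public class SEReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        String action = intent.getAction();
        if ("com.android.nfc_extras.action.AID_SELECTED".equals(action)) {
            // the AID of the applet selected via the contactless interface
            byte[] aid = intent
                    .getByteArrayExtra("com.android.nfc_extras.extra.AID");
            Log.d("SEReceiver", "AID selected: " + Arrays.toString(aid));
            // start a related activity, update state, etc.
        } else if ("com.android.nfc_extras.action.RF_FIELD_ON_DETECTED"
                .equals(action)) {
            Log.d("SEReceiver", "reader RF field detected");
        }
    }
}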

Summary

We showed that Google Wallet installs a few applets on the SE when first initialized. Besides the expected EMV payment applets, it makes use of a controller applet for securely storing Wallet state and a MIFARE manager applet for reading/writing emulated card sectors from the app. While we can get some information about the EMV environment by sending commands to the SE from an app, payment applets cannot be selected via the wired SE interface, only via the contactless NFC interface. Controller applet access is, however, available to third party apps, as long as they know the relevant APDU commands, which can easily be traced by logging. This might be one of the reasons why third party SE apps are not supported on Android yet. To make third party SE apps possible (besides offering a TSM solution), Android needs to implement more fine-grained access control for the SE, for example by restricting what applets can be selected or limiting the range of allowed APDUs for whitelisted apps.


Emulating a PKI smart card with CyanogenMod 9.1

In the last series of articles, we discussed the embedded secure element available in recent Android devices, its execution environment and how Google Wallet makes use of it. We also saw that unless you have a contract with Google and have them (or the TSM they use) distribute your applets to supported devices, there is currently no way to install anything on the embedded secure element. We briefly mentioned that CyanogenMod 9.1 supports software card emulation, and that it is a more practical way to create your own NFC-enabled applications. We'll now see how software card emulation works and show how you can use it to create a simple PKI 'applet' that can be accessed via NFC from any machine with a contactless card reader.

Software card emulation

We already know that if the embedded secure element is put in virtual mode, it is visible to external readers as a contactless smart card. Software card emulation (sometimes referred to as Host Card Emulation or HCE) does something very similar, but instead of routing commands received by the NFC controller to the SE, it delivers them to the application processor, where they can be processed by regular applications. Responses are then sent back via NFC to the reader, and thus your app takes the role of a virtual contactless 'smart card' (refer to this paper for a more thorough discussion). Software card emulation is currently available on BlackBerry phones, which offer standard APIs for apps to register with the OS and process card commands received over NFC. Besides a BlackBerry device, you can also use some contactless readers in emulation mode to emulate NFC tags or a full-featured smart card. Stock Android doesn't (yet) support software card emulation, even though the NFC controllers in most current phones have this capability. Fortunately, recent versions of CyanogenMod integrate a set of patches that unlock this functionality of the PN544 NFC controller found in recent Nexus (and other) devices. Let's see how it works in a bit more detail.

CyanogenMod implementation

Android doesn't provide a direct interface to its NFC subsystem to user-level apps. Instead, it leverages the OS's intent and intent filter infrastructure to let apps register for particular NFC events (ACTION_NDEF_DISCOVERED, ACTION_TAG_DISCOVERED and ACTION_TECH_DISCOVERED) and specify additional filters based on tag type or features. When a matching NFC tag is found, interested applications are notified and one of them is selected to handle the event, either by the user or automatically if it is in the foreground and has registered for foreground dispatch. The app can then access a generic Tag object representing the target NFC device and use it to retrieve a concrete tag technology interface such as MifareClassic or IsoDep that lets it communicate with the device and use its native features. Card emulation support in CyanogenMod doesn't attempt to change or amend Android's NFC architecture, but integrates with it by adding support for two new tag technologies: IsoPcdA and IsoPcdB. 'ISO' here is the International Organization for Standardization, which, among other things, is responsible for defining NFC communication standards. 'PCD' stands for Proximity Coupling Device, which is simply ISO-speak for a contactless reader. The two classes cover the two main NFC flavours in use today (outside of Japan, at least): Type A (based on NXP technology) and Type B (based on Motorola technology).

As you might have guessed by now, the patch reverses the usual roles in the Android NFC API: the external contactless reader is presented as a 'tag', and the 'commands' you send from the phone are actually replies to the reader-initiated communication. If you have Google Wallet installed, the embedded secure element is activated as well, so touching the phone to a reader would produce a potential conflict: should it route commands to the embedded SE or to applications that can handle IsoPcdA/B tags? The CyanogenMod patch handles this by using Android's native foreground dispatch mechanism: software card emulation is only enabled for apps that register for foreground dispatch of the relevant tag technologies. So unless you have an emulation app in the foreground, all communication is routed to Google Wallet (i.e., the embedded SE). In practice though, starting up Google Wallet on ROMs with the current version of the patch might block software card emulation, so it works best if Google Wallet is not installed. A fix is available, but not yet merged in CyanogenMod master (Update: now merged, should roll out with CM10 nightlies).

Both of the newly introduced tag technologies extend BasicTagTechnology and offer methods to open, check and close the connection to the reader. They add a public transceive() method that acts as the main communication interface: it receives reader commands and sends the responses generated by your app to the PCD. Here's a summary of the interface:

abstract class BasicTagTechnology implements TagTechnology {
    public boolean isConnected() {...}

    public void connect() throws IOException {...}

    public void reconnect() throws IOException {...}

    public void close() throws IOException {...}

    // sends our response ('data') to the reader and returns the
    // next command APDU received from it
    byte[] transceive(byte[] data, boolean raw) throws IOException {...}
}

Now that we know (basically) how it works, let's try to use software card emulation in practice.

Emulating a contactless card

As discussed in the previous section, to be able to respond to reader commands we need to register our app for one of the PCD tag technologies and enable foreground dispatch. This is no different from handling stock-supported NFC technologies. We need to add an intent filter and a reference to a technology filter file to the app's manifest:

<activity android:label="@string/app_name"
          android:launchMode="singleTop"
          android:name=".MainActivity" >
    <intent-filter>
        <action android:name="android.nfc.action.TECH_DISCOVERED" />
    </intent-filter>

    <meta-data android:name="android.nfc.action.TECH_DISCOVERED"
               android:resource="@xml/filter_nfc" />
</activity>

We register the IsoPcdA tag technology in filter_nfc.xml:

<resources>
    <tech-list>
        <tech>android.nfc.tech.IsoPcdA</tech>
    </tech-list>
</resources>

And then use the same technology list to register for foreground dispatch in our activity:

public class MainActivity extends Activity {

    private NfcAdapter adapter;
    private PendingIntent pendingIntent;
    private IntentFilter[] filters;
    private String[][] techLists;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        adapter = NfcAdapter.getDefaultAdapter(this);
        pendingIntent = PendingIntent.getActivity(this, 0, new Intent(this,
                getClass()).addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP), 0);
        filters = new IntentFilter[] { new IntentFilter(
                NfcAdapter.ACTION_TECH_DISCOVERED) };
        techLists = new String[][] { { "android.nfc.tech.IsoPcdA" } };
    }

    @Override
    public void onResume() {
        super.onResume();
        if (adapter != null) {
            adapter.enableForegroundDispatch(this, pendingIntent, filters,
                    techLists);
        }
    }

    @Override
    public void onPause() {
        super.onPause();
        if (adapter != null) {
            adapter.disableForegroundDispatch(this);
        }
    }
}

With this in place, each time the phone is touched to an active reader, we will be notified via the activity's onNewIntent() method. We can get a reference to the Tag object from the intent's extras as usual. However, since neither IsoPcdA nor its superclass is part of the public SDK, we need to either build the app as part of CyanogenMod's source or, as usual, resort to reflection. We choose to create a simple wrapper class that calls IsoPcdA methods via reflection, after getting an instance using the static get() method like this:

Class<?> cls = Class.forName("android.nfc.tech.IsoPcdA");
Method get = cls.getMethod("get", Tag.class);
// this returns an IsoPcdA instance, which we keep as an Object
// since the class is not part of the public SDK
Object tagTech = get.invoke(null, tag);

Now, after we connect(), we can use the transceive() method to reply to reader commands. Note that since the API is not event-driven, you won't get notified of reader commands automatically. You need to send a dummy payload to retrieve the first reader command APDU. This can be a bit awkward at first, but you just have to keep in mind that each time you call transceive(), the next reader command comes in via the return value. Unfortunately, this means that after you send your last response, the thread will block on I/O waiting for transceive() to return, which only happens after the reader sends its next command, which might be never. The thread will only stop if an exception is thrown, such as when communication is lost after separating the phone from the reader. Needless to say, this makes writing robust code a bit tricky. Here's how to start off the communication:

// send dummy data to get first command APDU
// at least two bytes to keep smartcardio happy
byte[] cmd = transceive(new byte[] { (byte) 0x90, 0x00 });

Writing a virtual PKI applet

Software card emulation in CyanogenMod is limited to ISO 14443-4 (used mostly for APDU-based communication), which means that you cannot emulate cards that operate on a lower-level protocol such as MIFARE Classic. This rules out opening door locks that rely on the card UID with your phone (the UID of the emulated card is random) or getting a free ride on the subway (you cannot clone a transit card with software alone), but allows for emulating payment (EMV) cards, which use an APDU-based protocol. In fact, the first commercial application that makes use of Android software card emulation, Tapp (from a company started by the patch's author, Doug Yeager), emulates a contactless Visa card and does all necessary processing 'in the cloud', i.e., on a remote server. Payment applications are the ones most likely to be developed using software card emulation because of the potentially higher revenue: at least one other company has announced that it is building a cloud-based NFC secure element. We, however, will look at a different use case: PKI.

PKI has been getting a lot of bad rep due to major CAs getting compromised every other month, and it has been stated multiple times that it doesn't really work on the Internet. It is however still a valid means of authentication in a corporate environment where personal certificates are used for anything from desktop login to remote VPN access. Certificates and associated private keys are often distributed on smart cards, sometimes contactless or dual-interface. Since Android now has standard credential storage which can be protected by hardware on supported devices, we could use an Android phone with software card emulation in place of a PKI card. Let's try to write a simple PKI 'applet' and an associated host-side client application to see if this is indeed feasible.

A PKI JavaCard applet can offer various features, but the essential ones are:
  • generating or importing keys
  • importing a public key certificate
  • user authentication (PIN verification)
  • signing and/or encryption with card keys
Since we will be using Android's credential storage to save keys and certificates, we already have the first two features covered. All we need to implement is PIN verification and signing (which is actually sufficient for most applications, including desktop login and SSL client authentication). If we were building a real solution, we would implement a well known applet protocol, such as that of a major vendor or an open one, such as the MUSCLE card protocol, so that we could take advantage of desktop tools and cryptographic libraries (Windows CSPs and PKCS#11 modules, such as OpenSC). But since this is a proof-of-concept exercise, we can get away with defining our own mini-protocol and implementing only the bare minimum. We define the applet AID (quite arbitrary, and possibly already in use by someone else, but there is really no way to check) and two commands: VERIFY PIN and SIGN DATA. The protocol is summarized in the table below:

Virtual PKI applet protocol

Command      CLA   INS   P1   P2   Lc                           Data                      Response
SELECT       00    A4    04   00   06                           AID: A00000000101         9000/6985/6A82/6F00
VERIFY PIN   80    01    XX   XX   PIN length (bytes)           PIN characters (ASCII)    9000/6982/6985/6F00
SIGN DATA    80    02    XX   XX   Signed data length (bytes)   Signed data               9000 + signature bytes/6982/6985/6F00

The applet's behaviour is rather simple: it returns a generic error if you try to send any commands before selecting it, and then requires you to authenticate by verifying the PIN before signing data. To implement the applet, we first handle new connections from a reader in the main activity's onNewIntent() method, where we receive an Intent containing a reference to the Tag object backed by IsoPcdA, which we use to communicate with the PCD. We verify that the request comes from a card reader, create a wrapper for the Tag object, connect() to the reader and finally pass control to the PkiApplet by calling its start() method.

Tag tag = (Tag) intent.getExtras().get(NfcAdapter.EXTRA_TAG);
List<String> techList = Arrays.asList(tag.getTechList());
if (!techList.contains("android.nfc.tech.IsoPcdA")) {
    // not a connection from a contactless reader, ignore
    return;
}

TagWrapper tw = new TagWrapper(tag, "android.nfc.tech.IsoPcdA");
if (!tw.isConnected()) {
    tw.connect();
}

pkiApplet.start(tw);

The applet in turn starts a background thread that reads commands as long as they are available and exits if communication with the reader is lost. The implementation is not terribly robust, but it works well enough for our POC:

Runnable r = new Runnable() {
    public void run() {
        try {
            // send dummy data to get the first command APDU
            byte[] cmd = transceive(new byte[] { (byte) 0x90, 0x00 });
            do {
                // process the command, then send our response and
                // receive the next command in a single step:
                // cmd = transceive(response);
            } while (cmd != null && !Thread.interrupted());
        } catch (IOException e) {
            // connection with the reader lost
            return;
        }
    }
};

appletThread = new Thread(r);
appletThread.start();


Before the applet can be used it needs to be 'personalized'. In our case this means importing the private key the applet will use for signing and setting a PIN. To initialize the private key we import a PKCS#12 file using the KeyChain API and store the private key alias in shared preferences. The PIN is protected using 5000 iterations of PBKDF2 with a 64-bit salt. We store the resulting PIN hash and the salt in shared preferences as well, and repeat the calculation against the PIN we receive from applet clients to check if it matches. This avoids storing the PIN in clear text, but keep in mind that a short numeric-only PIN can still be brute-forced in minutes (the app doesn't restrict PIN size; it can be up to 255 bytes, the maximum size of APDU data). Here's what our 'personalization' UI looks like:


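The key import itself can be done with the public KeyChain API (available since ICS). A rough sketch, with the request code and alias handling being ours:

// ask the system to install a PKCS#12 file into the credential storage
Intent intent = KeyChain.createInstallIntent();
intent.putExtra(KeyChain.EXTRA_PKCS12, pkcs12Bytes);
startActivityForResult(intent, REQUEST_INSTALL_KEY);

// later, off the main thread and after the app has been granted access
// to the alias (via KeyChain.choosePrivateKeyAlias()), get the key
PrivateKey signingKey = KeyChain.getPrivateKey(context, keyAlias);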
To make things simple, applet clients send the PIN in clear text, so it could theoretically be sniffed if NFC traffic is intercepted. This can be avoided by using some sort of challenge-response mechanism, similar to what 'real' (e.g., EMV) cards do. Once the PIN is verified, clients can send the data to be signed and receive the signature bytes in the response. Since the size of APDU data is limited to 255 bytes (due to the single-byte length field) and the applet doesn't support any sort of chaining, we are limited to RSA keys up to 1024 bits long (a 2048-bit key produces a 256-byte signature). The actual applet implementation is quite straightforward: it does some minimal checks on received APDU commands, gets the PIN or the data to be signed and uses it to execute the corresponding operation. It then selects a status code based on the operation's success or failure and returns it along with the result data in the response APDU. See the source code for details.
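The PIN hashing scheme described above would look roughly like this in code (the parameters follow the description; variable names are ours):

// derive a PIN hash using PBKDF2 with 5000 iterations and a 64-bit salt
byte[] salt = new byte[8];
new SecureRandom().nextBytes(salt);
PBEKeySpec spec = new PBEKeySpec(pin.toCharArray(), salt, 5000, 160);
SecretKeyFactory skf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
byte[] pinHash = skf.generateSecret(spec).getEncoded();
// store salt and pinHash; to verify a PIN, repeat the derivation with
// the stored salt and compare the result to the stored hash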

Writing a host-side applet client

Now that we have an applet, we need a host-side client to actually make use of it. As we mentioned above, for a real-world implementation this would be a standard PKCS#11 or CSP module for the host operating system that plugs into PKI-enabled applications such as browsers or email and VPN clients. We'll instead create our own test Java client using the Smart Card I/O API (JSR 268). This API has shipped with Sun/Oracle Java SDKs since version 1.6 (Java 6), but is not officially a part of the SDK, because it is apparently not 'of sufficiently wide interest' according to the JSR expert group (committee BS at its best!). Eclipse goes as far as to flag it as a 'forbidden reference API', so you'll need to change the error handling preferences to get the code to compile in Eclipse. In practice though, JSR 268 is a standard API that works fine on Windows, Solaris, Linux and Mac OS X (you may have to set the sun.security.smartcardio.library system property to point to your system's PC/SC library), so we'll use it for our POC application.

The API comes with classes representing card readers, the communication channel, and command and response APDUs. After we get a reference to a reader and then a card, we can create a channel and exchange APDUs with the card. Our PKI applet client is a basic command line program that waits for card availability and then simply sends the SELECT, VERIFY PIN and SIGN DATA commands in sequence, bailing out on any error (a card response with status different from 0x9000). The PIN is specified in the first command line parameter, and if you pass a certificate file path as the second one, it will be used to verify the signature returned by the applet. See the full code for details, but here's how to connect to a card and send a command:

TerminalFactory factory = TerminalFactory.getDefault();
CardTerminals terminals = factory.terminals();

Card card = waitForCard(terminals);
CardChannel channel = card.getBasicChannel();
// CMD holds a raw command APDU, e.g. the SELECT from the protocol table
CommandAPDU cmd = new CommandAPDU(CMD);
ResponseAPDU response = channel.transmit(cmd);

Card waitForCard(CardTerminals terminals)
        throws CardException {
    // block until a card (or phone) is placed on one of the readers
    while (true) {
        for (CardTerminal ct : terminals
                .list(CardTerminals.State.CARD_INSERTION)) {
            return ct.connect("*");
        }
        terminals.waitForChange();
    }
}

And to prove that this all works, here's the output from a test run of the client application:

$ ./run.sh 1234 mycert.crt 
Place phone/card on reader to start
--> 00A4040006A0000000010101
<-- 9000
--> 800100000431323334
<-- 9000
--> 80020000087369676E206D6521
<-- 11C44A5448... 9000 (128)

Got signature from card: 11C44A5448...
Will use certificate from 'mycert.crt' to verify signature
Issuer: CN=test-CA, ST=Tokyo, C=JP
Subject: CN=test, ST=Tokyo, C=JP
Not Before: Wed Nov 30 00:04:31 JST 2011
Not After: Thu Nov 29 00:04:31 JST 2012

Signature is valid: true

This software implementation comes, of course, with the disadvantage that while the actual private key might be protected by Android's system key store, PIN verification and other operations not directly protected by the OS are executed in a regular app. An Android app, unlike a dedicated smart card, could be compromised by other (malicious) apps with sufficient privileges. However, since recent Android devices do have (some) support for a Trusted Execution Environment (TEE), the sensitive parts of our virtual applet could be implemented as a Trusted Application (TA) running within the TEE. The user-level app would then communicate with the TA using the controlled TEE interface, and the security level of the system could come very close to running an actual applet in a dedicated SE.

Summary

Android already supports NFC card emulation using an embedded SE (stock Android) or the UICC (various vendor firmwares). However, both of those are tightly controlled by their owning entities (Google or MNOs), and there is currently no way for third party developers to install applets and create card emulation apps. An alternative to SE-based card emulation is software card emulation, where a user-level app processes reader commands and returns responses via the NFC controller. This is supported by commonly deployed NFC controller chips, but is not implemented in the stock Android NFC subsystem. Recent versions of CyanogenMod, however, do enable it by adding support for two more tag technologies (IsoPcdA and IsoPcdB) that represent contactless readers instead of actual tags. This allows Android applications to emulate pretty much any ISO 14443-4 compliant contactless card application: from EMV payment applications to any custom JavaCard applet. We presented a sample app that emulates a PKI card, allowing you to store PKI credentials on your phone and potentially use it for desktop login or VPN access on any machine equipped with a contactless reader. Hopefully, software card emulation will become a part of stock Android in the future, making this and other card emulation NFC applications mainstream.

Android online account management

Our recent posts covered NFC and the secure element as supported in recent Android versions, including community ones. In this two-part series we will take a completely different direction: managing online user accounts and accessing Web services. We will briefly discuss how Android manages user credentials and then show how to use cached authentication details to log in to most Google sites without requiring additional user input. Most of the functionality we shall discuss is hardly new -- it has been available at least since Android 2.0. But while there is ample documentation on how to use it, there doesn't seem to be a 'bigger picture' overview of how the pieces are tied together. This somewhat detailed investigation was prompted by trying to develop an app for a widely used Google service that unfortunately doesn't have an official API, and struggling to find a way to log in to it using cached Google credentials. More on this in the second part; let's first see how Android manages accounts for online services.

Android account management

Android 2.0 (API Level 5, largely non-existent, because it was quickly succeeded by 2.0.1, Level 6) introduced the concept of centralized account management with a public API. The central piece of the API is the AccountManager class which, quote, 'provides access to a centralized registry of the user's online accounts. The user enters credentials (user name and password) once per account, granting applications access to online resources with "one-click" approval.' You should definitely read the full documentation of the class, which is quite extensive, for more details. Another major feature of the class is that it lets you get an authentication token for supported accounts, allowing third party applications to authenticate to online services without needing to handle the actual user password (more on this later). It also has no fewer than five methods that allow you to get an authentication token, all but one taking at least four parameters, so finding the one you need might take some time, and getting the parameters right some more. It might be a good idea to start with the synchronous blockingGetAuthToken() and work your way from there once you have a basic working flow. On some older Android versions, the AccountManager would also monitor your SIM card and wipe cached credentials if you swapped cards, but fortunately this 'feature' was removed in Android 2.3.4.
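Here's roughly what the simplest flow looks like. The token type string is service-specific ('mail' below is just an illustration), and the call blocks, so it must not be made on the main thread:

AccountManager am = AccountManager.get(context);
Account[] accounts = am.getAccountsByType("com.google");
if (accounts.length > 0) {
    // may prompt the user to approve access on first use
    String token = am.blockingGetAuthToken(accounts[0], "mail", true);
    // ... use the token when calling the online service
}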

The AccountManager, like most Android system APIs, is just a facade for the AccountManagerService, which does the actual work. The service doesn't provide an implementation for any particular form of authentication, though. It only acts as a coordinator for a number of pluggable authenticator modules for different account types (Google, Twitter, Exchange, etc.). The best part is that any application can register an authenticator module by implementing an account authenticator and related classes, if needed. Android Training has a tutorial on the subject that covers the implementation details, so we will not discuss them here. Registering a new account type with the system lets you take advantage of a number of Android infrastructure services:
  • centralized credential storage in a system database
  • ability to issue tokens to third party apps
  • ability to take advantage of Android's automatic background synchronization
One thing to note is that while credentials (usually user names and passwords) are stored in a central database (/data/system/accounts.db, or /data/system/users/0/accounts.db for the first system user on Jelly Bean and later) that is only accessible to system applications, the credentials themselves are in no way encrypted -- encryption is left to the authentication module to implement as necessary. If you have a rooted device (or use the emulator), listing the contents of the accounts table can be quite instructive: some of your passwords, especially for the stock Email application, will show up in clear text. While the AccountManager has a getPassword() method, it can only be used by apps with the same UID as the account's authenticator, i.e., only by classes in the same app (unless you are using sharedUserId, which is not recommended for non-system apps). If you want to allow third party applications to authenticate using your custom accounts, you have to issue some sort of authentication token, accessible via one of the many getAuthToken() methods. Once your account is registered with Android, if you implement an additional sync adapter, you can register to have it called at a specified interval and do background syncing for your app (one- or two-way), without needing to manage scheduling yourself. This is a very powerful feature that you get practically for free, and it probably merits its own post. As we now have a basic understanding of authentication modules, let's see how they are used by the system.

As we mentioned above, account management is coordinated by the AccountManagerService. It is a fairly complex piece of code (about 2500 lines in JB), most of the complexity stemming from the fact that it needs to communicate with services and apps that span multiple processes and threads within each process, and needs to take care of synchronization and delivering results to the right thread. If we abstract out the boilerplate code, what it does on a higher level is actually fairly straightforward:
  • on startup it queries the PackageManager to find out all registered authenticators, and stores references to them in a map, keyed by account type
  • when you add an account of a particular type, it saves its type, username and password to the accounts table
  • if you get, set or reset the password for an account, it accesses or updates the accounts table accordingly
  • if you get or set user data for the account, it is fetched from or saved to the extras table
  • when you request a token for a particular account, things become a bit more interesting:
    • if a token with the specified type has never been issued before, it shows a confirmation activity (see screenshot below) asking the user to approve access for the requesting application. If they accept, the UID of the requesting app and the token type are saved to the grants table.
    • if a grant already exists, it checks the authtokens table for tokens matching the request. If a valid one exists, it is returned.
    • if a matching token is not found, it finds the authenticator for the specified account type in the map and calls its getAuthToken() method to request a token. This usually involves the authenticator fetching the username and password from the accounts table (via the getPassword() method) and calling its respective online service to get a fresh token. When one is returned, it gets cached in the authtokens table and then returned to the requesting app (usually asynchronously via a callback).
  • if you invalidate a token, it gets deleted from the authtokens table

Now that we know how Android's account management system works, let's see how it is implemented for the most widely used account type.

Google account management

Usually the first thing you do when you turn on your brand new (or freshly wiped) 'Google Experience' Android device is to add a Google account. Once you authenticate successfully, you are offered to sync data from associated online services (GMail, Calendar, Docs, etc.) to your device. What happens behind the scenes is that an account of type 'com.google' is added via the AccountManager, and a bunch of Google apps start getting tokens for the services they represent. Of course, all of this works with the help of an authentication provider for Google accounts. Since it plugs into the standard account management framework, it works by registering an authenticator implementation, and using it involves the sequence outlined above. However, it is also a little bit special. Three main things make it different:
  • it is not part of any particular app you can install, but is bundled with the system
  • a lot of the actual functionality is implemented on the server side
  • it does not store passwords in plain text on the device
If you have ever installed a community ROM built off AOSP code, you know that in order to get GMail and other Google apps to work on your device, you need a few bits not found in AOSP. Two of the required pieces are the Google Services Framework (GSF) and the Google Login Service (GLS). The former provides common services to all Google apps, such as centralized settings and feature toggle management, while the latter implements the authentication provider for Google accounts and will be the topic of this section.

Google provides a multitude of online services (not all of which survive for long), and consequently a bunch of different methods to authenticate to them. Android's Google Login Service, however, doesn't call those public authentication APIs directly, but goes through a dedicated online service, which lives at android.clients.google.com. It has endpoints both for authentication and authorization token issuing, as well as for data feed (mail, calendar, etc.) synchronization, and more. As we shall see, the supported methods of authentication are somewhat different from those available via other public Google authentication APIs. Additionally, it supports a few 'special' token types that greatly simplify some complex authentication flows.

All of the above is hardly surprising: when you are dealing with online services, it is only natural to have as much of the authentication logic as possible on the server side, both for ease of maintenance and to keep it secure. Still, to kick-start it you need to store some sort of credentials on the device, especially when you support background syncing for practically everything and cannot expect people to enter them manually. On-device credential management is one of the services GLS provides, so let's see how it is implemented. As mentioned above, GLS plugs into the system account framework, so cached credentials, tokens and associated extra data are stored in the system's accounts.db database, just as for other account types. Inspecting it reveals that Google accounts have a bunch of Base64-encoded strings associated with them. One of the user data entries (in the extras table) is helpfully labeled sha1hash (but does not exist on all Android versions), and the password (in the accounts table) is a long string that takes different formats on different Android versions. Additionally, the GSF database has a google_login_public_key entry, which, when decoded, suspiciously resembles a 1024-bit RSA public key. Some more experimentation reveals that credential management works differently on pre-ICS and post-ICS devices. On pre-ICS devices, GLS stores an encrypted version of your password and posts it to the server-side endpoints both when authenticating for the first time (when you add the account) and when it needs to have a token for a particular service issued. On post-ICS devices, it only posts the encrypted password the first time, and gets a 'master token' in exchange, which is then stored on the device (in the password column of the accounts database). Each subsequent token request uses that master token instead of a password.

Let's look into the cached credential strings a bit more. The encrypted password is 133 bytes long, and thus it is a fair bet that it is encrypted with the 1024-bit (128-byte) RSA public key mentioned above, with some extra data prepended (133 - 128 = 5 bytes). Adding multiple accounts that use the same password produces different password strings (which is a good thing), but the first few bytes are always the same, even on different devices. It turns out those identify the encryption key: they are derived by hashing its raw value and taking the leading bytes of the resulting hash. At least in our limited sample of Android devices, the RSA public key used is constant both across Android versions and accounts. We can safely assume that its private counterpart lives on the server side and is used to decrypt sent passwords before performing the actual authentication. The padding used is OAEP (with SHA1 and MGF1), which produces random-looking messages and is currently considered secure (at least when used in combination with RSA) against most advanced cryptanalysis techniques. It also has quite a bit of overhead, which in practice means that the GLS encryption scheme can encrypt at most 86 bytes of data. The outlined encryption scheme is not exactly military-grade, and there is the issue of millions of devices most probably using the same key, but recovering the original password should be sufficiently hard to discourage most attackers. However, let's not forget that we also have a somewhat friendlier SHA1 hash available. It turns out it can be easily reproduced by 'salting' the Google account password with the account name (typically the GMail address) and doing a single round of SHA1. This is considerably easier to attack, and it wouldn't be too hard to precompute a bunch of hashes based on commonly used or likely passwords if you knew the target account name.
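In code, the described hash amounts to something like this (the charset and the encoding of the stored value are our assumptions):

// SHA1 'salted' with the account name: SHA1(accountName + password)
MessageDigest md = MessageDigest.getInstance("SHA-1");
byte[] hash = md.digest((accountName + password).getBytes("UTF-8"));
String stored = Base64.encodeToString(hash, Base64.NO_WRAP);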

Fortunately, newer versions of Android (4.0 and later) no longer store this hash on the device. Instead of the encrypted password + SHA1 hash combination, they store an opaque 'master token' (most probably some form of OAuth token) in the password column and exchange it for authentication tokens for different Google services. It is not clear whether this token ever expires or if it is updated automatically. You can, however, revoke it manually by going to the security settings of your Google account and revoking access for the 'Android Login Service' (and a bunch of other stuff you never use, while you are at it). This will force the user to re-authenticate on the device the next time it tries to get a Google auth token, so it is also somewhat helpful if you ever lose your device and don't want people accessing your email, etc. if they manage to unlock it. The service authorization token issuing protocol uses some device-specific data in addition to the master token, so obtaining only the master token should not be enough to authenticate and impersonate a device (it can, however, be used to log in to your Google account on the Web; see the second part for details).

Google Play Services

Google Play Services (we'll abbreviate it to GPS, although the actual package is com.google.android.gms; guess where the 'M' came from) was announced at this year's Google I/O as an easy to use platform that offers integration with Google products for third-party Android apps. It was actually rolled out only a month ago, so it's probably not very widely used yet. Currently it provides support for OAuth 2.0 authorization to Google APIs 'with a good user experience and security', as well as some Google+ integration (sign-in and the +1 button). Getting OAuth 2.0 tokens via the standard AccountManager interface has been supported for quite some time (though support was considered 'experimental') by using the special 'oauth2:scope' token type syntax. However, it didn't work reliably across different Android builds, which bundle different GLS versions, resulting in slightly different behaviour. Additionally, the permission grant dialog shown when requesting a token was not particularly user friendly, because in some cases it showed the raw OAuth 2.0 scope, which probably means little to most users (see the screenshot in the first section). While some human-readable aliases for certain scopes were introduced (e.g., 'Manage your tasks' for 'oauth2:https://www.googleapis.com/auth/tasks'), that solution was neither ideal nor universally available. GPS solves this by making token issuing a two-step process (newer GLS versions also use this process):
    1. the first request is much like before: it includes the account name, master token (or encrypted password pre-ICS) and requested service, in the 'oauth2:scope' format. GPS adds two new parameters: requesting app package name and app signing certificate SHA1 hash (more on this later). The response includes some human readable details about the requested scope and requesting application, which GPS shows in a permission grant dialog like the one shown below.
    2. if the user grants the permission, this decision is recorded in the extras table in a proprietary format which includes the requesting app's package name, signing certificate hash, OAuth 2.0 scope and grant time (note that it is not using the grants table). GPS then resends the authorization request, setting the has_permission parameter to 1. On success this results in an OAuth 2.0 token and its expiry date in the response. Those are cached in the authtokens table in a similar format.
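
    From the app's point of view, both the legacy and the GPS-backed flows hide behind the same AccountManager call. A minimal sketch (the scope used is just an example):

    AccountManager am = AccountManager.get(activity);
    Account account = am.getAccountsByType("com.google")[0];
    // the 'oauth2:' prefix plus a raw OAuth 2.0 scope selects the OAuth 2.0 flow
    String tokenType = "oauth2:https://www.googleapis.com/auth/userinfo.profile";
    String token = am.getAuthToken(account, tokenType, null, activity, null, null)
            .getResult().getString(AccountManager.KEY_AUTHTOKEN);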

    To be able to actually use a Google API, you need to register your app's package name and signing key in Google's API console. The registration lets services that validate the token ask Google which app the token was issued for, and thus identify the calling app. This has one subtle, but important side-effect: you don't have to embed an API key in your app and send it with every request. Of course, for a third-party published app you can easily find out both the package name and the signing certificate, so it is not particularly hard to get a token issued in the name of some other app (not possible via the official API, of course). We can assume that there are some additional checks on the server side that prevent this, but theoretically, if you used such a token you could, for example, exhaust a third-party app's API request quota by issuing a bunch of requests over a short period of time.

    The actual GPS implementation seems to reuse much of the original Google Login Service authentication logic, including the password encryption method, which is still used on pre-ICS devices (the protocol is, after all, mostly the same and it needs to be able to use pre-existing accounts). On top of that it adds better OAuth 2.0 support, a version-specific account selection dialog and some prettier and more user-friendly permission grant UIs. The GPS app has the Google apps shared UID, so it can directly interact with other proprietary Google services, including GLS and GSF. This allows it, among other things, to directly get and write Google account credentials and tokens to the accounts database. As can be expected, GPS runs in a remote service that the client library you link into your app accesses. The major selling point against the legacy AccountManager API is that while its underlying authenticator modules (GLS and GSF) are part of the system, and as such cannot be updated without an OTA, GPS is a user-installable app that can be easily updated via Google Play. Indeed, it is advertised as auto-updating (much like the Google Play Store client), so app developers presumably won't have to rely on users to update it if they want to use newer features (unless GPS is disabled altogether, of course). This update mechanism is meant to provide 'agility in rolling out new platform capabilities', but considering how much time the initial roll-out took, it remains to be seen how agile the whole thing will turn out to be. Another thing to watch out for is feature bloat: besides OAuth 2.0 support, GPS currently includes G+ and AdMob related features, and while both are indeed Google-provided services, they are totally unrelated. Hopefully, GPS won't turn into an 'everything Google plus the kitchen sink' type of library, delaying releases even more. With all that said, if your app uses OAuth 2.0 tokens to authenticate to Google API's, which is currently the preferred method (ClientLogin, OAuth 1.0 and AuthSub have been officially deprecated), definitely consider using GPS over 'raw' AccountManager access.
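
    With the GPS client library, getting a token boils down to a single blocking call (to be invoked off the main thread). The account name and scope below are illustrative, and REQUEST_CODE_AUTH is an arbitrary request code:

    try {
        String token = GoogleAuthUtil.getToken(context, "user@gmail.com",
                "oauth2:https://www.googleapis.com/auth/userinfo.profile");
        // use the token, e.g., in an 'Authorization: Bearer <token>' header
    } catch (UserRecoverableAuthException e) {
        // user input needed: launch the supplied account picker/grant UI
        activity.startActivityForResult(e.getIntent(), REQUEST_CODE_AUTH);
    } catch (GoogleAuthException e) {
        // unrecoverable error, e.g. an invalid scope
    } catch (IOException e) {
        // transient network error, retry with backoff
    }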

    Summary

    Android provides a centralized registry of user online accounts via the AccountManager class. It lets you both get tokens for existing accounts without having to handle the actual credentials and register your own account type, if needed. Registering an account type gives you access to powerful system features, such as authentication token caching and automatic background synchronization. 'Google experience' devices come with built-in support for Google accounts, which lets third party apps access Google online services without needing to directly request authentication information from the user. The latest addition to this infrastructure is the recently released Google Play Services app and companion client library, which aim to make it easy to use OAuth 2.0 from third party applications. 

    We've now presented an overview of how the account management system works, and the next step is to show how to actually use it to access a real online service. That will be the topic of the second article in the series. 

    Single sign-on to Google sites using AccountManager

    In the first part of this series, we presented how the standard Android online account management framework works and explored how Google account authentication and authorization modules are implemented on Android. In this article we will see how to use the Google credentials stored on the device to log in to Google Web sites automatically. Note that this is different from using public Google API's, which generally only requires putting an authentication token (and possibly an API key) in a request header, and is quite well supported by the Google APIs Client Library. First, some words on what motivated this whole exercise (may include some ranting, feel free to skip to the next section).

    Android developer console API: DIY

    If you have ever published an application on the Android Market (now Google Play Store), you are familiar with the Android developer console. Besides letting you publish and update your apps, it also shows the number of total and active installs (notoriously broken and not to be taken too seriously, though it's been getting better lately), ratings and comments. Depending on how excited about the whole app publishing business you are, you might want to check it quite often to see how your app is doing, or maybe you just like hitting F5. Most people don't however, so pretty much every developer at some point comes up with the heretic idea that there must be a better way: you should be able to check your app's statistics on your Android device (obviously!), you should get notified about changes automatically and maybe even be able to see at a glance if today's numbers are better than yesterday's. Writing such a tool should be fairly easy, so you start looking for an API. If your search ends up empty it's not your search engine's fault: there is none! So before you start scraping those pretty Web pages with your favourite P-language, you check if someone has done this before -- you might get a few hits, and if you are lucky even find the Android app.

    Originally developed by Timelappse, and now open source, Andlytics does all the things mentioned above, and more (and if you need yet another feature, consider contributing). So how does it manage to do all of this without an API? Through blood, sweat and a lot of protocol reversing (read: guessing). You see, the current developer console is built on GWT, which used to be Google's webstack-du-jour a few years back. GWT essentially consists of RPC endpoints at the server, called by a JavaScript client running in the browser. The serialization protocol in between is a custom one, and the specification is purposefully not publicly available (apparently, to allow for easier changes!?!). It has two main features: you need to know exactly what the transferred objects look like to be able to make any sense of it, and it was obviously designed by someone who used to write compilers for a living before they got into Web development ('string table' ring a bell?). Given the above, Andlytics was quite an accomplishment. Additionally, the developer console changing its protocol every other week and adding new features from time to time didn't really make it any easier to maintain. Eventually, the original developer had a bit too much GWT on his plate, and was kind enough to open source it, so others could share the pain.

    But there is a bright side to all this: Developer Console v2. It was announced at this year's Google I/O to much applause, but was only made universally available a couple of weeks ago (sound familiar?). It is a work in progress, but is showing promise. And the best part: it uses perfectly readable (if a bit heavy on null's) JSON to transport data! Naturally, there was much rejoicing at the Andlytics Github project. It was unanimously decided that the sooner we obliterate all traces of GWT, the better, and the next version should use the v2 console 'API'. Deciphering the protocol didn't take long, but it turned out that while to log in to the v1 console all you needed was a ClientLogin token (see the next section for an explanation) straight out of Android's AccountManager, the new one was not so forgiving and the login flow was somewhat more complex. Asking the user for their password and using it to log in was obviously doable, but no one would like that, so we needed to figure out how to log in using the Google credentials already cached on the device. The Android browser and Chrome are able to automatically log you in to the developer console without requiring your password, so it was clearly possible. The process is not really documented though, and that prompted this (maybe a bit too wide-cast) investigation. Which finally leads us to the topic of this post: to show how to use cached Google account credentials for single sign-on. Let's first see what standard ways are available to authenticate to Google's public services and API's.

    Google services authentication and authorization

    The official place to start when selecting an auth mechanism is the Google Accounts Authentication and Authorization page. It lists quite a few protocols, some open and some proprietary. If you research further you will find that currently all but OAuth 2.0 and OpenID are considered deprecated, and using the proprietary ones is not recommended. However, a lot of services are still using older, proprietary protocols, so we will look into some of those as well. Most protocols also have two variations: one for Web applications and one for the so-called 'installed applications'. Web applications run in a browser, and are expected to be able to take advantage of all standard browser features: rich UI, free-form user interaction, cookie store and ability to follow redirects. Installed applications, on the other hand, don't have a native way to preserve session information, and may not have the full Web capabilities of a browser. Android native applications (mostly) fall in the 'installed applications' category, so let's see what protocols are available for them.

    ClientLogin

    The oldest and, until now, most widely used authorization protocol for installed applications is ClientLogin. It assumes the application has access to the user's account name and password and lets you get an authorization token for a particular service, which can be saved and used for accessing that service on behalf of the user. Services are identified by proprietary service names, for example 'cl' for Google Calendar and 'ah' for Google App Engine. A (non-exhaustive) list of supported service names can be found in the Google Data API reference. Here are a few Android-specific ones, not listed in the reference: 'ac2dm', 'android', 'androidsecure', 'androiddeveloper', 'androidmarket' and 'youngandroid' (probably for the discontinued App Inventor). The token can be fairly long-lived (up to two weeks), but cannot be refreshed and the application needs to obtain a new token when it expires. Additionally, there is no way to validate the token short of accessing the associated service: if you get an OK HTTP status (200), it is still valid; if 403 is returned, you need to consult the additional error code and retry or get a new token. Another limitation is that ClientLogin tokens don't offer fine-grained access to a service's resources: access is all or nothing, you cannot specify read-only access or access to a particular resource only. The biggest drawback for use in mobile apps though is that ClientLogin requires access to the actual user password. Therefore, if you don't want to force users to enter it each time a new token is required, it needs to be saved on the device, which poses various problems. As we saw in the previous post, in Android this is handled by GLS and the associated online service by storing an encrypted password or a master token on the device. Getting a token is as simple as calling the appropriate AccountManager method, which either returns a cached token or issues an API request to fetch a fresh one. Despite its many limitations, the protocol is easy to understand and straightforward to implement, so it has been widely used. It has been officially deprecated since April 2012 though, and apps using it are encouraged to migrate to OAuth 2.0, but this hasn't quite happened yet.
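
    For reference, here is roughly what goes over the wire when a ClientLogin token is requested directly. The parameter values are illustrative, and on Android you would normally let GLS handle this via getAuthToken() instead:

    URL url = new URL("https://www.google.com/accounts/ClientLogin");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setDoOutput(true); // POST
    String params = "accountType=GOOGLE&Email=user%40gmail.com&Passwd=secret"
            + "&service=cl&source=example-app"; // 'cl' = Google Calendar
    conn.getOutputStream().write(params.getBytes("UTF-8"));
    // on success (HTTP 200) the body contains SID=..., LSID=... and Auth=...
    // lines; the Auth value is then sent with each service request as:
    //   Authorization: GoogleLogin auth=<token>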

    OAuth 2.0

    No one likes OAuth 1.0 (except Twitter) and AuthSub is not quite suited for native applications, so we will only look at the currently recommended OAuth 2.0 protocol. OAuth 2.0 has been in the works for quite some time, but it only recently became an official Internet standard. It defines different authorization 'flows', aimed at different use cases, but we will not try to present all of them here. If you are unfamiliar with the protocol, refer to one of the multiple posts that aim to explain it at a higher level, or just read the RFC if you need the details. And, of course, you can watch this for a slightly different point of view. We will only discuss how OAuth 2.0 relates to native mobile applications.

    The OAuth 2.0 specification defines four basic flows for getting an authorization token for a resource. The two that don't require the client (in our scenario, an Android app) to directly handle user credentials (the Google account user name and password), namely the authorization code grant flow and the implicit grant flow, share a common step that needs user interaction. They both require the authorization server (Google's) to authenticate the resource owner (the user of our Android app) and establish whether they grant or deny the access request for the specified scope (e.g., read-only access to profile information). In a typical Web application that runs in a browser, this is very straightforward to do: the user is redirected to an authentication page, then to an access grant page that basically says 'Do you allow app X to access data Y and Z?', and if they agree, another redirect, which includes an authorization token, takes them back to the original application. The browser simply needs to pass on the token in the next request to gain access to the target resource. Here's an official Google example that uses the implicit flow: follow this link and grant access as requested to let the demo Web app display your Google profile information. With a native app things are not that simple. It can either
    • use the system browser to handle the permission grant step, which would typically involve the following steps:
      • launch the system browser and hope that the user will finish the authentication and permission grant process
      • detect success or failure and extract the authorization token from the browser on success (from the window title, redirect URL or the cookie store)
      • ensure that after granting access, the user ends up back in your app
      • finally, save the token locally and use it to issue the intended Web API request
    • embed a WebView or a similar control in the apps's UI. Getting a token would generally involve these steps:
      • in the app's UI, instruct the user what to do and load the login/authorization page
      • register for a 'page loaded' callback, and check for the final success URL each time it's called
      • when found, extract the token from the redirect URL or the WebView's cookie jar and save it locally
      • finally use the token to send the intended API request
    Neither is ideal, both are confusing to the user, and to implement the first one on Android you might even have to (temporarily) start a Web server (redirect_uri is set to http://localhost in the API console, so you can't just use a custom scheme). The second one is generally preferable, if not pretty: here's a (somewhat outdated) overview of what needs to be done and a more recent example with full source code (see also the sketch below). This integration complexity and UI impedance mismatch are the problems that OAuth 2.0 support, initially via the AccountManager and more recently via Google Play Services, aims to solve. When using either of those, user authentication is implemented transparently by passing the saved master token (or encrypted password) to the server side component, and instead of a WebView with a permission grant page, you get the Android native access grant dialog. If you approve, a second request is sent to convey this and the returned access token is directly delivered to the requesting app. This is essentially the same flow as for Web applications, but it has the advantage that it doesn't require context switching from native to browser and back, and is much more user friendly. Of course, it only works for Google accounts, so if you wanted to write, say, a Facebook client, you would still have to use a WebView to process the access permission grant and get an authorization token.
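
    To give an idea of what the WebView variant involves, here is a bare-bones sketch of the 'page loaded' callback approach. The authorization URL (AUTHORIZATION_URL), redirect URI and token extraction are simplified placeholders and will vary with the actual endpoint:

    WebView webView = (WebView) findViewById(R.id.web_view);
    webView.getSettings().setJavaScriptEnabled(true);
    webView.setWebViewClient(new WebViewClient() {
        @Override
        public void onPageFinished(WebView view, String url) {
            // watch for the final redirect carrying the token (the implicit
            // flow returns it in the URL fragment, hence the '#' to '?' trick)
            if (url.startsWith("http://localhost/oauth2callback")) {
                Uri uri = Uri.parse(url.replace('#', '?'));
                String token = uri.getQueryParameter("access_token");
                // save the token and dismiss the WebView
            }
        }
    });
    webView.loadUrl(AUTHORIZATION_URL); // the user logs in and grants access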

    Now that we have an idea what authentication methods are available, let's see if we can use them to access an online Google service that doesn't have a dedicated API.

    Google Web properties single sign-on

    Being able to access multiple related, but separate services without needing to authenticate to each one individually is generally referred to as single sign-on (SSO). There are multiple standard ways to accomplish this for different contexts, ranging from Kerberos to SAML-based solutions. We will use the term here in a narrower meaning: being able to use different Google services (Web sites or API's) after having authenticated to only one of them (including the Android login service). If you have a fairly fast Internet connection, you might not even notice it, but after you log in to, say, Gmail, clicking on YouTube links will take you to a completely different domain, and yet you will be able to comment on that neat cat video without having to log in again. If you have a somewhat slower connection and a wide display though, you may notice that there is a lot of redirecting and long parameter passing, with the occasional progress bar going on. What happens behind the scenes is that your current session cookies and authentication tokens are being exchanged for yet other tokens and more cookies, to let you seamlessly log in to that other site. If you are curious, you can observe the flow with Chrome's built-in developer tools (or similar plugins for other browsers), or check out our sample. All of those requests and responses are essentially a proprietary SSO protocol (Google's), which is not really publicly documented anywhere, and, of course, is likely to change fairly often as Google rolls out upgrades to their services. With that said, there is a distinct pattern, and on a higher level you only have two main cases. We are deliberately ignoring the persistent cookie ('Stay signed in') scenario for simplicity's sake.
    • Case 1: you haven't authenticated to any of the Google properties. If you access, for example, mail.google.com in that state you will get a login screen originating at https://accounts.google.com/ServiceLogin with parameters specifying the service you are trying to access ('mail' for Gmail) and where to send you after you are authenticated. After you enter your credentials, you will generally get redirected a few times around accounts.google.com, which will set a few session cookies, common (Domain=.google.com) for all services (always SID and LSID, plus a few more). The last redirect will be to the originally requested service and include an authentication token in the redirected location (usually specified with the auth parameter, e.g.: https://mail.google.com/mail/?auth=DQAAA...). The target service will validate the token and set a few more service-specific session cookies, restricted by domain and path, and with the Secure and HttpOnly flags set. From there, it might take a couple more redirects before you finally land at an actual content page.
    • Case 2: you have already authenticated to at least one service (Gmail in our example). In this state, if you open, say, Calendar, you will go through https://accounts.google.com/ServiceLogin again, but this time the login screen won't be shown. The accounts service will modify your SID and LSID cookies, maybe set a few new ones and finally redirect you to the original service, adding an authentication token to the redirect location. From there the process is similar: one or more service-specific cookies will be set and you will finally be redirected to the target content.
    Those flows obviously work well for browser-based logins, but since we are trying to do this from an Android app, without requiring user credentials or showing WebView's, we have a different scenario. We can easily get a ClientLogin or an OAuth 2.0 token from the AccountManager, but since we are not performing an actual Web login, we have no cookies to present. The question becomes: is there a way to log in with a standard token alone? Since tokens can be used with the data APIs (where available) of each service, they obviously contain enough information to authenticate us and grant access to the service's resources. What we need is a Web endpoint that will take our token and, in exchange, give us a set of cookies we could use to access the corresponding Web site. Clues and traces of such a service are scattered around the Internet, mostly in the code of unofficial Google client libraries and applications. Once we know it is definitely possible, the next problem becomes getting it to work with Android's AccountManager.

    Logging in using AccountManager

    The only real documentation we could find, besides code comments and READMEs of the unofficial Google client applications mentioned above, is a short Chromium OS design document. It tells us that the standard (at the time) login API for installed applications, ClientLogin, alone is not enough to accomplish Web SSO, and outlines a three-step process that lets us exchange ClientLogin tokens for session cookies valid for a particular service:
    1. Get a ClientLogin token (this we can do via the AccountManager)
    2. Pass it to https://www.google.com/accounts/IssueAuthToken, to get a one-time use, short-lived token that will authenticate the user to any service (the so-called 'ubertoken')
    3. Finally, pass the ubertoken to https://www.google.com/accounts/TokenAuth, to exchange it for the full set of browser cookies we need to do SSO
    This outlines the process, but is a little light on the details. Fortunately, those can be found in the Chromium OS source code, as well as a few other projects. After a fair bit of digging, here's what we uncovered:
      1. To get the mythical ubertoken, you need to pass the SID and LSID cookies to the IssueAuthToken endpoint like this:
        https://www.google.com/accounts/IssueAuthToken?service=gaia&Session=false&SID=sid&LSID=lsid
      2. The response will give you the ubertoken, which you pass to the TokenAuth endpoint along with the URL of the service you want to use:
        https://www.google.com/accounts/TokenAuth?source=myapp&auth=ubertoken&continue=service-URL
      3. If the token checks out OK, the response will give you a URL to load. If your HTTP client is set up to follow redirects automatically, once you load it, the needed cookies will be set automatically (just as in a browser), and you will finally land on the target site. As long as you keep the same session (which usually means the same HTTP client instance) you will be able to issue multiple requests, without needing to go through the authentication flow again.
      What remains to be seen is whether we can implement this on Android. As usual, it turns out that there is more than one way to do it:

      The hard way

      The straightforward way would be to simply implement the flow outlined above using your favourite HTTP client library. We chose to use Apache HttpClient, which supports session cookies and multiple requests using a single instance out of the box. The first step calls for the SID and LSID cookies though, not an authentication token: we need cookies to get a token, in order to get more cookies. Since Android's AccountManager can only give us authentication tokens, and not cookies, this might seem like a hopeless catch-22 situation. However, while browsing the authtokens table of the system's accounts database earlier, we happened to notice that it actually had a bunch of tokens with type SID and LSID. Our next step is, of course, to try to request those tokens via the AccountManager interface, and this happens to work as expected:

      // request the SID and LSID 'tokens' (actually cookie values) from the AccountManager
      String sid = am.getAuthToken(account, "SID", null, activity, null, null)
              .getResult().getString(AccountManager.KEY_AUTHTOKEN);
      String lsid = am.getAuthToken(account, "LSID", null, activity, null, null)
              .getResult().getString(AccountManager.KEY_AUTHTOKEN);

      Having gotten those, the rest is just a matter of issuing two HTTP requests (error handling omitted for brevity):

      String TARGET_URL = "https://play.google.com/apps/publish/v2/";
      Uri ISSUE_AUTH_TOKEN_URL = Uri.parse(
              "https://www.google.com/accounts/IssueAuthToken?service=gaia&Session=false");
      Uri TOKEN_AUTH_URL = Uri.parse("https://www.google.com/accounts/TokenAuth");

      DefaultHttpClient httpClient = new DefaultHttpClient();

      // step 1: exchange the SID/LSID cookies for an ubertoken
      String url = ISSUE_AUTH_TOKEN_URL.buildUpon().appendQueryParameter("SID", sid)
              .appendQueryParameter("LSID", lsid)
              .build().toString();
      HttpPost getUberToken = new HttpPost(url);
      HttpResponse response = httpClient.execute(getUberToken);
      String uberToken = EntityUtils.toString(response.getEntity(), "UTF-8");

      // step 2: exchange the ubertoken for session cookies for the target service
      String getCookiesUrl = TOKEN_AUTH_URL.buildUpon()
              .appendQueryParameter("source", "android-browser")
              .appendQueryParameter("auth", uberToken)
              .appendQueryParameter("continue", TARGET_URL)
              .build().toString();
      HttpGet getCookies = new HttpGet(getCookiesUrl);
      response = httpClient.execute(getCookies);

      CookieStore cookieStore = httpClient.getCookieStore();
      // check for the service-specific session cookie (findCookie() is a simple helper)
      String adCookie = findCookie(cookieStore.getCookies(), "AD");
      // fail if not found, otherwise get the page content
      String responseStr = EntityUtils.toString(response.getEntity(), "UTF-8");

      This lets us authenticate to the Android Developer Console (version 2) site without requiring user credentials, and we can easily proceed to parse the result and use it in a native app (warning: work in progress!) from here. The downside is that for this to work, the user has to grant access twice, for two cryptic-looking token types (SID and LSID).

      Of course, after writing all of this, it turns out that the stock Android browser already has code that does it, which we could have used or at least referenced from the very beginning. Better yet, this find leads us to a yet easier way to accomplish our task.

      The easy way

      The easy way is found right next to the Browser class referenced above, in the DeviceAccountLogin class, so we can't really take any credit for this. It is hardly anything new, but some Googling suggests that it is neither widely known nor used much. You might have noticed that the Android browser is able to silently log you in to Gmail and friends when you use the mobile site. The way this is implemented is via the 'magic' token type 'weblogin:'. If you use it along with the service name and URL of the site you want to access, it will do all of the steps listed above automatically and, instead of a token, it will give you a full URL you can load to get automatically logged in to your target service. This magic URL is in the format shown below, and includes both the ubertoken and the URL of the target site, as well as the service name (this example is for the Android Developer Console; the line is broken for readability):

      https://accounts.google.com/MergeSession?args=service%3Dandroiddeveloper%26continue
      %3Dhttps://play.google.com/apps/publish/v2/&uberauth=APh...&source=AndroidWebLogin

      Here's how to get the MergeSession URL:

      String tokenType = "weblogin:service=androiddeveloper&"
      + "continue=https://play.google.com/apps/publish/v2/";
      String loginUrl = accountManager.getAuthToken(account,tokenType, false, null, null)
      .getResult().getString(AccountManager.KEY_AUTHTOKEN);

      This is again for the Developer Console, but works for any Google site, including Gmail, Calendar and even the account management page. The only problem you might have is finding the service name, which is hardly obvious in some cases (e.g., 'grandcentral' for Google Voice and 'lh2' for Picasa).

      It takes only a single HTTP request from Android to get the final URL, which tells us that the token issuing flow is implemented on the server side. This means that you can also use the Google Play Services client library to issue a weblogin: 'token' (see screenshot below and note that, unlike for OAuth 2.0 scopes, it shows the 'raw' token type). It probably goes without saying, but it also means that if you happen to come across someone's accounts.db file, all it takes to log in to their Google account(s) is two HTTPS requests: one to get the MergeSession URL, and one to log in to their accounts page. If you are thinking 'This doesn't affect me, I use Google two-factor authentication (2FA)!', you should know that in this case 2FA doesn't really help. Why? Because Android doesn't support 2FA natively, so to register an account with the AccountManager you need to use an application-specific password (Update: on ICS and later, GLS will actually show a WebView and let you authenticate using your password and OTP. However, the OTP is not required once you get the master token). And once you have entered one, any tokens issued based on it will just work (until you revoke it), without requiring an additional code. So if you value your account, keep your master tokens close and revoke them as soon as you suspect that your phone might be lost or stolen. Better yet, consider a solution that lets you wipe it remotely (which might not work after you revoke the tokens, so be sure to check how it works before you actually need it).


      As we mentioned above, this is all ClientLogin based, which is officially deprecated, and might be going away soon (EOL scheduled for April 2013). But some of the Android Google data sync feeds still depend on ClientLogin, so if you use it you will probably be OK for a while. Additionally, since the weblogin: implementation is server-based, it might be updated to conform with the latest (OAuth 2.0-based?) infrastructure without changing the client-side interface. In any case, watch the Android Browser and Chromium code to keep up to date.

      Summary

      Google offers multiple online services, some with both a traditional browser-based interface and a developer-oriented API. Consequently, there are multiple ways to authenticate to those, ranging from form-based username and password login to authentication API's such as ClientLogin and OAuth 2.0. It is relatively straightforward to get an authentication token for services with a public API on Android, either using Android's native AccountManager interface or the newer Google Play Services extension. Getting the required session cookies to login automatically to the Web sites of services that do not offer an API is however neither obvious, nor documented. Fortunately, it is possible and very easy to do if you combine the special 'weblogin:' token type with the service name and the URL of the site you want to use. The best available documentation about this is the Android Browser source code, which uses the same techniques to automatically log you in to Google sites using the account(s) already registered on your device.

      Moral of the story: interoperability is so much easier when you control all parties involved.

      Certificate pinning in Android 4.2

      A lot has happened in the Android world since our last post, with new devices being announced and going on and off sale. Most importantly, however, Android 4.2 has been released and made its way to AOSP. It's an evolutionary upgrade, bringing various improvements and some new user and developer features. This time around, security-related enhancements made it into the what's new list, and there are quite a lot of them. The most widely publicized one has been, as expected, the one users may actually see -- application verification. It recently got an in-depth analysis, so in this post we will look into something less visible, but nevertheless quite important -- certificate pinning.

      PKI's trust problems and proposed solutions

      In the highly unlikely case that you haven't heard about it, the trustworthiness of the existing public CA model has been severely compromised in the past couple of years. It has been suspect for a while, but recent high-profile CA security breaches have brought this problem into the spotlight. Attackers managed to issue certificates for a wide range of sites, including Windows Update servers and Gmail. Not all of those were used (or at least not detected) in real attacks, but the incidents showed just how much of current Internet technology depends on certificates. Fraudulent ones can be used for anything from installing malware to spying on Internet communication, all while fooling users that they are using a secure channel or installing a trusted executable. And better security for CA's is not really a solution: major CA's have willingly issued hundreds of certificates for unqualified names such as localhost, webmail and exchange (here is a breakdown, by number of issued certificates). These could enable eavesdropping on internal corporate traffic by using the certificates for a man-in-the-middle (MITM) attack against any internal host accessed using an unqualified name. And of course there is also the matter of compelled certificate creation, where a government agency could compel a CA to issue a false certificate to be used for intercepting secure traffic (and all this may be perfectly legal).

      Clearly the current PKI system, which is largely based on a pre-selected set of trusted CA's (trust anchors), is problematic, but what are some of the actual problems? There are different takes on this one, but for starters, there are too many public CA's. As this map by the EFF's SSL Observatory project shows, there are more than 650 public CA's trusted by major browsers. Recent Android versions ship with over one hundred (140 for 4.2) trusted CA certificates, and until ICS the only way to remove a trusted certificate was a vendor-initiated OS OTA. Additionally, there is generally no technical restriction on what certificates CA's can issue: as the Comodo and DigiNotar attacks have shown, anyone can issue a certificate for *.google.com (name constraints don't apply to root CA's and don't really work for a public CA). Furthermore, since CA's don't publicize what certificates they have issued, there is no way for site operators (in this case Google) to know when someone issues a new, possibly fraudulent, certificate for one of their sites and take appropriate action (certificate transparency standards aim to address this). In short, with the current system, if any of the built-in trust anchors is compromised, an attacker could issue a certificate for any site, and neither users accessing it, nor the owner of the site would notice. So what are some of the proposed solutions?

      Proposed solutions range from radical: scrap the whole PKI idea altogether and replace it with something new and better (DNSSEC is a usual favourite); through moderate: use the current infrastructure but do not implicitly trust CA's; to evolutionary: maintain compatibility with the current system, but extend it in ways that limit the damage of a CA compromise. DNSSEC is still not universally deployed, although the key TLD domains have already been signed. Additionally, it is inherently hierarchical and actually more rigid than PKI, so it doesn't really fit the bill too well. Other even remotely viable solutions have yet to emerge, so we can safely say that the radical path is currently out of the picture. Moving towards the moderate side, some people suggest the SSH model, in which no sites or CA's are initially trusted, and users decide what sites to trust on first access. Unlike SSH however, the number of sites that you access directly or indirectly (via CDN's, embedded content, etc.) is virtually unlimited, and user-managed trust is quite unrealistic. In a similar vein, but much more practical, is Moxie Marlinspike's (of sslstrip and CloudCracker fame) Convergence. It is based on the idea of trust agility, a concept he introduced in his SSL And The Future Of Authenticity talk (and related blog post). It both abolishes the browser (or OS) pre-selected trust anchor set, and recognizes that users cannot possibly independently make trust decisions about all the sites they visit. Trust decisions are delegated to a set of notaries that can vouch for a site by basically confirming that the certificate you receive from a site is one they have seen before. If multiple notaries point out the same certificate as correct, users can be reasonably sure that it is genuine and therefore trustworthy. Convergence is not a formal standard, but was released as actual working code including a Firefox plugin (client) and server-side notary software. While this system is promising, the number of available notaries is currently limited, and Google has publicly stated that it won't add it to Chrome, and it cannot currently be implemented as an extension either (Chrome lacks the necessary API's to let plugins override the default certificate validation module).

      That leads us to the current evolutionary solutions, which have been deployed to a fairly large user base, mostly courtesy of the Chrome browser. One is certificate blacklisting, which is more of a band-aid solution: in addition to removing compromised CA certificates from the trust anchor set with a browser update, it also explicitly refuses to trust their public keys in order to cover the case where they are manually added to the trust store again. Chrome added blacklisting around the time Comodo was compromised, and Android has this feature since the original Jelly Bean release (4.1). The next one, certificate pinning (more accurately public key pinning), takes the converse approach: it whitelists the keys that are trusted to sign certificates for a particular site. Let's look at it in a bit more detail. 

      Certificate pinning

      Pinning was introduced in Google Chrome 13 in order to limit the CA's that can issue certificates for Google properties. It actually helped discover the MITM attack against Gmail, which resulted from the DigiNotar breach. It is implemented by maintaining a list of public keys that are trusted to issue certificates for a particular DNS name. The list is consulted when validating the certificate chain for a host, and if the chain doesn't include at least one of the whitelisted keys, validation fails. In practice the browser keeps a list of SHA1 hashes of the SubjectPublicKeyInfo (SPKI) field of trusted certificates. Pinning the public keys instead of the actual certificates allows for updating host certificates without breaking validation and requiring pinning information update. You can find the current Chrome list here.
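
      Computing an SPKI hash is straightforward, since Java's X509Certificate hands you the DER-encoded SubjectPublicKeyInfo directly. A quick sketch (SHA1 here to match Chrome's list; Android's own pin list, as we will see below, uses SHA512):

      static byte[] spkiHash(X509Certificate cert) throws Exception {
          // getEncoded() on the public key returns the DER-encoded
          // SubjectPublicKeyInfo structure
          byte[] spki = cert.getPublicKey().getEncoded();
          return MessageDigest.getInstance("SHA-1").digest(spki);
      }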

      As you can see, the list now pins non-Google sites as well, such as twitter.com and lookout.com, and is rather large. Including more sites will only make it larger, and it is quite obvious that hard-coding pins doesn't really scale. A couple of new Internet standards have been proposed to help solve this scalability problem: Public Key Pinning Extension for HTTP (PKPE) by Google and Trust Assertions for Certificate Keys (TACK) by Moxie Marlinspike. The first one is simpler and proposes a new HTTP header (Public-Key-Pins, PKP) that holds pinning information, including public key hashes, pin lifetime and whether to apply pinning to subdomains of the current host. Pinning information (or simply 'pins') is cached by the browser and used when making trust decisions until it expires. Pins are required to be delivered over a secure (TLS) connection, and the first connection that includes a PKP header is implicitly trusted (or optionally validated against pins built into the client). The protocol also supports an endpoint to report failed validations to, via the report-uri directive, and allows for a non-enforcing mode (specified with the Public-Key-Pins-Report-Only header), where validation failures are reported, but connections are still allowed. This makes it possible to notify host administrators about possible MITM attacks against their sites, so that they can take appropriate action. The TACK proposal, on the other hand, is somewhat more complex and defines a new TLS extension (TACK) that carries pinning information signed with a dedicated 'TACK key'. TLS connections to a pinned hostname require the server to present a 'tack' containing the pinned key and a corresponding signature over the TLS server's public key. Thus both pinning information exchange and validation are carried out at the TLS layer. In contrast, PKPE uses the HTTP layer (over TLS) to send pinning information to clients, but also requires validation to be performed at the TLS layer, dropping the connection if validation against the pins fails. Now that we have an idea how pinning works, let's see how it's implemented on Android.

      Certificate pinning in Android

      As mentioned at the beginning of the post, pinning is one of the many security enhancements introduced in Android 4.2. The OS doesn't come with any built-in pins, but instead reads them from a file in the /data/misc/keychain directory (where user-added certificates and blacklists are stored). The file is called, you guessed it, simply pins and is in the following format: hostname=enforcing|SPKI SHA512 hash, SPKI SHA512 hash,.... Here enforcing is either true or false and is followed by a list of SPKI hashes (SHA512) separated by commas. Note that there is no validity period, so pins are valid until deleted. The file is used not only by the browser, but system-wide, by virtue of pinning being integrated in libcore. In practice this means that the default (and only) system X509TrustManager implementation (TrustManagerImpl) consults the pin list when validating certificate chains. However there is a twist: the standard checkServerTrusted() method doesn't consult the pin list. Thus any legacy libraries that do not know about certificate pinning will continue to function exactly as before, regardless of the contents of the pin list. This has probably been done for compatibility reasons, and is something to be aware of: running on 4.2 doesn't necessarily mean that you get the benefit of system-level certificate pins. The pinning functionality is exposed to third-party libraries or SDK apps via the new X509TrustManagerExtensions SDK class. It has a single method, List<X509Certificate> checkServerTrusted(X509Certificate[] chain, String authType, String host), that returns a validated chain on success or throws a CertificateException if validation fails. Note the last parameter, host. This is what the underlying implementation (TrustManagerImpl) uses to search the pin list for matching pins. If one is found, the public keys in the chain being validated will be checked against the hashes in the pin entry for that host. If none of them matches, validation will fail and you will get a CertificateException. So what part of the system uses the new pinning functionality then? The default SSL engine (JSSE provider), namely the client handshake (ClientHandshakeImpl) and SSL socket (OpenSSLSocketImpl) implementations. They check their underlying X509TrustManager and, if it supports pinning, perform additional validation against the pin list. If validation fails, the connection won't be established, thus implementing pin validation on the TLS layer as required by the standards discussed in the previous section. We now know what the pin list is and who uses it, so let's find out how it is created and maintained.
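
      Before moving on, checking a chain against the pin list from your own code might look something like the sketch below (the chain, auth type and host would come from your connection code):

      TrustManagerFactory tmf = TrustManagerFactory.getInstance(
              TrustManagerFactory.getDefaultAlgorithm());
      tmf.init((KeyStore) null); // use the system trust store
      X509TrustManager tm = (X509TrustManager) tmf.getTrustManagers()[0];
      X509TrustManagerExtensions tme = new X509TrustManagerExtensions(tm);
      try {
          // throws CertificateException if chain or pin validation fails
          List<X509Certificate> validChain =
                  tme.checkServerTrusted(chain, "RSA", "www.google.com");
      } catch (CertificateException e) {
          // reject the connection
      }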

      First off, at the time of this writing, Google-managed (on Nexus devices) JB 4.2 installations have an empty pin list (i.e., the pins file doesn't exist). Thus certificate pinning on Android has not been widely deployed yet. Eventually it will be, but the current state of affairs makes it easier to play with, because restoring to factory state requires simply deleting the pins file and associated metadata (root access required). As you might expect, the pins file is not written directly by the OS. Updating it is triggered by a broadcast (android.intent.action.UPDATE_PINS) that contains the new pins in its extras. The extras contain the path to the new pins file, its new version (stored in /data/misc/keychain/metadata/version), a hash of the current pins and a SHA512withRSA signature over all the above. The receiver of the broadcast (CertPinInstallReceiver) will then verify the version, hash and signature, and if they are valid, atomically replace the current pins file with the new content (the same procedure is used for updating the premium SMS numbers list). Signing the new pins ensures that they can only be updated by whoever controls the private signing key. The corresponding public key used for validation is stored as a system secure setting under the "config_update_certificate" key (usually in the secure table of the /data/data/com.android.providers.settings/databases/settings.db database). Just like the pins file, this value currently doesn't exist, so it's relatively safe to install your own key in order to test how pinning works. Restoring to factory state requires deleting the corresponding row from the secure table. This basically covers the current pinning implementation in Android; it's now time to actually try it out.

      Using certificate pinning

      To begin with, if you are considering using pinning in an Android app, you don't need the latest and greatest OS version. If you are connecting to a server that uses a self-signed or a private CA-issued certificate, chances are you might already be using pinning. Unlike a browser, your Android app doesn't need to connect to practically every possible host on the Internet, but only to a limited number of servers that you know and have control over (limited control in the case of hosted services). Thus you know in advance who issued your certificates and only need to trust their key(s) in order to establish a secure connection to your server(s). If you are initializing a TrustManagerFactory with your own keystore file that contains the issuing certificate(s) of your server's SSL certificate, you are already using pinning: since you don't trust any of the built-in trust anchors (CA certificates), if any of those gets compromised your app won't be affected (unless it also talks to affected public servers). If you, for some reason, need to use the default trust anchors as well, you can define pins for your keys and validate them after the default system validation succeeds. For more thoughts on this and some sample code (it doesn't support ICS and later, but there is a pull request with the required changes), refer to this post by Moxie Marlinspike.
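
      The 'trust only your own keystore' flavour of pinning mentioned above might look like this; the keystore resource, password and BKS type are assumptions for the example:

      // a keystore containing only the certificate(s) that issued your
      // server's SSL certificate
      KeyStore keyStore = KeyStore.getInstance("BKS");
      InputStream in = context.getResources().openRawResource(R.raw.mytruststore);
      keyStore.load(in, "keystorepass".toCharArray());

      TrustManagerFactory tmf = TrustManagerFactory.getInstance(
              TrustManagerFactory.getDefaultAlgorithm());
      tmf.init(keyStore);

      SSLContext sslContext = SSLContext.getInstance("TLS");
      sslContext.init(null, tmf.getTrustManagers(), null);
      // connections made via this socket factory trust only your own CA(s)
      HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
      conn.setSSLSocketFactory(sslContext.getSocketFactory());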

      Before we (finally!) start using pinning in 4.2 a word of warning: using the sample code presented below both requires root access and modifies core system files. It does have some limited safety checks, but it might break your system. If you decide to run it, make sure you have a full system backup and proceed with caution.

      As we have seen, pins are stored in a simple text file, so we could just write one up and place it in the required location. It would be picked up and used by the system TrustManager, but that is not much fun and is not how the system actually works. We will go through the 'proper' channel instead by creating and sending a correctly signed update broadcast. To do this, we first need to create and install a signing key. The sample app has one embedded so you can just use that, or generate and load a new one using OpenSSL (convert it to PKCS#8 format to include in Java code). To install the key we need the WRITE_SECURE_SETTINGS permission, which is only granted to system apps, so we must either sign our test app with the platform key (on a self-built ROM) or copy it to /system/app (on a rooted phone with stock firmware). Once this is done we can install the key by updating the "config_update_certificate" secure setting:

      Settings.Secure.putString(ctx.getContentResolver(), "config_update_certificate", 
      "MIICqDCCAZAC...");

      If this is successful we then proceed to constructing our update request. This requires reading the current pin list version (from /data/misc/keychain/metadata/version) and the current pins file content. Initially both should be empty, so we can just start off with 0 and an empty string. We can then create our pins file, concatenate it with the above and sign the whole thing before sending the UPDATE_PINS broadcast. For updates, things are a bit more tricky since the metadata/version file's permissions don't allow for reading by a third party app. We work around this by launching a root shell to get the file contents with cat, so don't be alarmed if you get a 'Grant root?' popup by SuperSU or its brethren. Hashing and signing are pretty straightforward, but creating the new pins file merits some explanation.
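
      Before we get to that, the signing helper itself might look like the sketch below; the exact signed content and its concatenation order are an assumption modeled on the AOSP pinning unit tests, so check the source if you reuse it:

      static String createSignature(String content, String version,
              String requiredHash, PrivateKey signingKey) throws Exception {
          Signature sig = Signature.getInstance("SHA512withRSA");
          sig.initSign(signingKey);
          // sign the new pins content, the new version and the current hash
          sig.update(content.getBytes("UTF-8"));
          sig.update(version.getBytes("UTF-8"));
          sig.update(requiredHash.getBytes("UTF-8"));
          return Base64.encodeToString(sig.sign(), Base64.NO_WRAP);
      }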

      To make it easier to test, we create (or append to) the pins file by connecting to the URL specified in the app and pinning the public keys in the host's certificate chain (we'll use www.google.com in this example, but any host accessible over HTTPS should do). Note that we don't actually pin the host's SSL certificate: this is to allow for the case where the host key is lost or compromised and a new certificate is issued to the host. This is introduced in the PKPE draft as a necessary security trade-off to allow for host certificate updates. Also note that in the case of one (or more) intermediate CA certificates we pin both the issuing certificate's key(s) and the root certificate's key. This is to allow for testing more variations, but is not something you might want to do in practice: for a connection to be considered valid, only one of the keys in the pin entry needs to be in the host's certificate chain. In the case that this is the root certificate's key, connections to hosts with certificates issued by a compromised intermediary CA will be allowed (think hacked root CA reseller). And above all, getting and creating pins based on certificates you receive from a host on the Internet is obviously pointless if you are already the target of a MITM attack. For the purposes of this test, we assume that this is not the case. Once we have all the data, we fire the update intent, and if it checks out the pins file will be updated (watch the logcat output to confirm). The code for this will look something like this (largely based on pinning unit test code in AOSP). With that, it is time to test if pinning actually works.

      URL url = new URL("https://www.google.com");
      HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
      conn.setRequestMethod("GET");
      conn.connect();

      // pin the issuing certificate's key (chain[0] is the host certificate)
      X509Certificate[] chain = (X509Certificate[]) conn.getServerCertificates();
      X509Certificate cert = chain[1];
      String pinEntry = String.format("%s=true|%s", url.getHost(), getFingerprint(cert));
      String contentPath = makeTemporaryContentFile(pinEntry);
      String version = getNextVersion("/data/misc/keychain/metadata/version");
      String currentHash = getHash("/data/misc/keychain/pins");
      String signature = createSignature(pinEntry, version, currentHash);

      Intent i = new Intent();
      i.setAction("android.intent.action.UPDATE_PINS");
      i.putExtra("CONTENT_PATH", contentPath);
      i.putExtra("VERSION", version);
      i.putExtra("REQUIRED_HASH", currentHash);
      i.putExtra("SIGNATURE", signature);
      sendBroadcast(i);

      We have now pinned www.google.com, but how to test if the connection will actually fail? There are multiple ways to do this, but to make things a bit more realistic we will launch a MITM attack of sorts by using an SSL proxy. We will use the Burp proxy, which works by generating a new temporary (ephemeral) certificate on the fly for each host you connect to (if you prefer a terminal-based solution, try mitmproxy). If you install Burp's root certificate in Android's trust store and are not using pinning, browsers and other HTTP clients have no way of distinguishing the ephemeral certificate Burp generates from the real one and will happily allow the connection. This allows Burp to decrypt the secure channel on the fly and enables you to view and manipulate traffic as you wish (strictly for research purposes, of course). Refer to the Getting Started page for help with setting up Burp. Once we have Burp all set up, we need to configure Android to use it. While Android does support HTTP proxies, those are generally only used by the built-in browser and it is not guaranteed that HTTP libraries will use the proxy settings as well. Since Android is after all Linux, we can easily take care of this by setting up a 'transparent' proxy that redirects all HTTP traffic to our chosen host by using iptables. If you are not comfortable with iptables syntax or simply prefer an easy to use GUI, there's an app for that as well: Proxy Droid. After setting up Proxy Droid to forward packets to our Burp instance we should have all Android traffic flowing through our proxy. Open a couple of pages in the browser to confirm before proceeding further (make sure Burp's 'Intercept' button is off if traffic seems stuck).

      Finally time to connect! The sample app allows you to test connection with both of Android's HTTP libraries (HttpURLConnection and Apache's HttpClient), just press the corresponding 'Check w/ ...' button. Since validation is done at the TLS layer, the connection shouldn't be allowed and you should see something like this (the error message may say 'No peer certificates' for HttpClient; this is due to the way it handles validation errors):



      If you instead see a message starting with 'X509TrustManagerExtensions verify result: Error verifying chain...', the connection did go through, but our additional validation using the X509TrustManagerExtensions class detected the changed certificate and failed. This shouldn't happen, right? It does though, because HTTP clients cache connections (SSLSocket instances, which in turn each hold an X509TrustManager instance, which only reads pins when it is created). The easiest way to make sure pins are picked up is to reboot the phone after you pin your test host. If you try connecting with the Android browser after rebooting (not Chrome!), you will be greeted with this message:


      As you can see, the certificate for www.google.com is issued by our Burp CA, but it might as well be from DigiNotar: if the proper public keys are pinned, Android will detect the fraudulent host certificate and show a warning. This works because the Android browser uses the system trust store and pins via the default TrustManager, even though it doesn't use JSSE SSL sockets. Connecting with Chrome on the other hand works fine, even though it does have built-in pins for Google sites: Chrome allows manually installed trust anchors to override system pins, so that tools such as Burp or Fiddler continue to work (or pinning is not yet enabled on Android, which is somewhat unlikely).


      So there you have it: pinning on Android works. If you look at the sample code, you will see that we have created enforcing pins, and that is why we get connection errors when connecting through the proxy. If you set the enforcing parameter to false instead, the connection will be allowed, but chains that fail validation will still be recorded to the system dropbox (/data/system/dropbox) in cert_pin_failure@timestamp.txt files, one for each validation failure.
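
      If you want to perform the explicit check yourself, here is roughly what it looks like in code -- a minimal sketch, assuming chain holds the peer certificate chain obtained from the SSLSession, and using a hypothetical checkPins() helper:

      import android.net.http.X509TrustManagerExtensions;

      import java.security.KeyStore;
      import java.security.cert.CertificateException;
      import java.security.cert.X509Certificate;
      import javax.net.ssl.TrustManager;
      import javax.net.ssl.TrustManagerFactory;
      import javax.net.ssl.X509TrustManager;

      static boolean checkPins(X509Certificate[] chain, String host) throws Exception {
          // find the system default X509TrustManager
          TrustManagerFactory tmf = TrustManagerFactory.getInstance(
                  TrustManagerFactory.getDefaultAlgorithm());
          tmf.init((KeyStore) null);
          X509TrustManager tm = null;
          for (TrustManager t : tmf.getTrustManagers()) {
              if (t instanceof X509TrustManager) {
                  tm = (X509TrustManager) t;
                  break;
              }
          }
          X509TrustManagerExtensions ext = new X509TrustManagerExtensions(tm);
          try {
              // throws if the chain doesn't validate or an enforcing pin fails
              ext.checkServerTrusted(chain, "RSA", host);
              return true;
          } catch (CertificateException e) {
              return false;
          }
      }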

      Summary

      Android adds certificate pinning by keeping a pin list with an entry for each pinned DNS name. Pin entries include a host name, an enforcing parameter and a list of SPKI SHA512 hashes of the keys that are allowed to sign a certificate for that host. The pin list is updated by sending a broadcast with signed update data. Applications using the default HTTP libraries get the benefit of system-level pinning automatically, or can explicitly check a certificate chain against the pin list by using the X509TrustManagerExtensions SDK class. Currently the pin list is empty, but the functionality is available now, and once pins for major sites are deployed it will add another layer of defense against MITM attacks that follow after a CA has been compromised.

      Secure USB debugging in Android 4.2.2

      It seems we somehow managed to let two months slip by without a single post. Time to get back on track, and the recently unveiled Android maintenance release provides a nice opportunity to jump start things. Official release notes for Android 4.2.2 don't seem to be available at this time, but it made its way into AOSP quite promptly, so you can easily compile your own changelog based on git log messages. Or, you can simply check the now traditional one over at Funky Android. As you can see, there are quite a few changes, and if you want a higher level overview your time would probably be better spent reading some of the related posts by the usual suspects. Deviating from our usually somewhat obscure topics, we will focus on a new security feature that is quite visible and has received a fair bit of attention already. It was even introduced on the official Android Developers Blog, fortunately for us only in brief. As usual, we like to dig a little deeper, so if you are interested in more details about the shiny new secure debugging feature, read on.

      Why bother securing debugging?

      If you have done development in any programming environment, you know that 'debugging' is usually the exact opposite of 'secure'. Debugging typically involves inspecting (and sometimes even changing) internal program state, dumping encrypted communication data to log files, universal root access and other scary, but necessary activities. It is hard enough without having to bother with security, so why further complicate things by making developers jump through security hoops? As it turns out, Android debugging, as provided by the Android Debug Bridge (ADB), is quite versatile and gives you almost complete control over a device when enabled. This is, of course, very welcome if you are developing or testing an application (or the OS itself), but it can also be used for other purposes. Before we give an overview of those, here is a (non-exhaustive) list of things ADB lets you do:
      • debug apps running on the device (using JDWP)
      • install and remove apps
      • copy files to and from the device
      • execute shell commands on the device
      • get the system and apps logs
      If debugging is enabled on a device, you can do all of the above and more simply by connecting the device to a computer with a USB cable. If you think that's not much of a problem because the device is locked, here's some bad news: you don't have to unlock the device in order to execute ADB commands. And it gets worse -- if the device is rooted (as are many developer devices), you can access and change every single file, including system files and password databases. Of course, that is not the end of it: you don't actually need a computer with development tools in order to do this -- another Android device and an OTG USB cable are sufficient. Security researchers, most notably Kyle Osborn, have built tools (there's even a GUI) that automate this and try very hard to extract as much data as possible from the device in a very short time. As we mentioned, if the device is rooted all bets are off -- it is trivial to lift all of your credentials, disable or crack the device lock and even log into your Google account(s). But even without root, anything on external storage (SD card) is accessible (for example your precious photos), as are your contacts and text messages. See Kyle's presentations for details and other attack vectors.
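
      Each item in the list above maps to a one-line adb invocation. For example (file and directory names are, of course, hypothetical):

      $ adb install evil.apk                 # install an app
      $ adb pull /sdcard/DCIM/ ./loot/       # copy files off the device
      $ adb shell id                         # execute shell commands
      $ adb logcat -d                        # dump the system log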

      By now you should be at least concerned about leaving ADB access wide open, so let's look at some ways to secure it.

      Securing ADB

      Despite some innovative attacks, none of the above is particularly new, but it has remained mostly unaddressed, probably because debugging is a developer feature regular users don't even know about. There have been some third-party solutions though, so let's briefly review those before introducing the one implemented in the core OS. Two of the more popular apps that allow you to control USB debugging are ADB Toggle and AdbdSecure. They automatically disable ADB debugging when the device is locked or unplugged, and enable it again when you unlock it or plug in the USB cable. This is generally sufficient protection, but has one major drawback -- starting and stopping the adbd daemon requires root access. If you want to develop and test apps on a device with stock firmware, you still have to disable debugging manually. Root access typically goes hand-in-hand with running custom firmware -- you usually need root access to flash a new ROM version (or at least it makes it much easier) and some of the apps shipping with those ROMs take advantage of root access to give you extra features not available in the stock OS (full backup, tethering, firewalls, etc.). As a result of this, custom ROMs have traditionally shipped with root access enabled (typically in the form of a SUID su binary and an accompanying 'Superuser' app). Thus, once you installed your favourite custom ROM you were automatically 'rooted'. CyanogenMod (which has over a million users and growing) changed this almost a year ago by disabling root access in their ROMs and giving you the option to enable it for apps only, for ADB, or for both. This is not a bad compromise -- you can both run root apps and have ADB enabled without exposing your device too much, and it can be used in combination with an app that automates toggling ADB for even more control. Of course, these solutions don't apply to the majority of Android users -- those running stock OS versions.

      The first step in making ADB access harder to reach was taken in Android 4.2, which hid the 'Developer options' settings screen, requiring you to use a secret knock in order to enable it. While this is mildly annoying for developers, it makes sure that most users cannot enable ADB access by accident. This is, of course, only a stop-gap measure, and once you manage to turn USB debugging on, your device is once again vulnerable. A proper solution was introduced in the 4.2.2 maintenance release with the so-called 'secure USB debugging' (it was actually committed almost a year ago, but for some reason didn't make it into the original JB release). 'Secure' here refers to the fact that only hosts explicitly authorized by the user can now connect to the adbd daemon on the device and execute debugging commands. Thus if someone tries to connect a device to another one via USB in order to access ADB, they need to first unlock the target device and authorize access from the debug host by clicking 'OK' in the confirmation dialog shown below. You can make your decision persistent by checking the 'Always allow from this computer' checkbox, and debugging will work just as before, as long as you are on the same machine. One thing to note is that on tablets with multi-user support the confirmation dialog is only shown to the primary (administrator) user, so you will need to switch to it in order to enable debugging. Naturally, this 'secure debugging' is only effective if you have a reasonably secure lock screen password in place, but everyone has one of those, right? That's pretty much all you need to know in order to secure your developer device, but if you are interested in how all of this is implemented under the hood, proceed to the next sections. We will first give a very brief overview of the ADB architecture and then show how it has been extended in order to support authenticated debugging.


      ADB overview

      The Android Debug Bridge serves two main purposes: it keeps track of all devices (or emulators) connected to a host, and it offers various services to its clients (command line clients, IDEs, etc.). It consists of three main components: the ADB server, the ADB daemon (adbd) and the default command line client (adb). The ADB server runs on the host machine as a background process and decouples clients from the actual devices or emulators. It monitors device connectivity and sets their state appropriately (CONNECTED, OFFLINE, RECOVERY, etc.). The ADB daemon runs on an Android device (or emulator) and provides the actual services clients use. It connects to the ADB server through USB or TCP/IP, and receives and processes commands from it. Finally, adb is the command line client that lets you send commands to a particular device. In practice it is implemented in the same binary as the ADB server and thus shares much of its code.

      The client talks to the local ADB server via TCP (typically via localhost:5037) using text-based commands, and receives OK or FAIL responses in return. Some commands, like enumerating devices, port forwarding or daemon restart, are handled by the local ADB server, and some (e.g., shell or log access) naturally require a connection to the target Android device. Device access is generally accomplished by forwarding input and output streams to/from the host. The transport layer that implements this uses simple messages with a 24-byte header and an optional payload to exchange commands and responses. We will not go into further details about those, but will only note the newly added authentication commands in the next section. For more details refer to the protocol description in system/core/adb/protocol.txt and this presentation, which features quite a few helpful diagrams and examples.
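
      For orientation, the 24-byte header boils down to six 32-bit fields; the sketch below follows the struct described in protocol.txt (A_AUTH, one such command identifier, is the one relevant to the next section):

      struct message {
          unsigned command;       /* command identifier constant (e.g., A_AUTH) */
          unsigned arg0;          /* first argument */
          unsigned arg1;          /* second argument */
          unsigned data_length;   /* length of payload (0 is allowed) */
          unsigned data_check;    /* crc32 of the data payload */
          unsigned magic;         /* command ^ 0xffffffff */
      };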

      Secure ADB implementation

      The ADB host authentication functionality is enabled by default when the ro.adb.secure system property is set to 1, and there is no way to disable it via the system settings interface (which is a good thing). The device is initially in the OFFLINE state and only goes into the ONLINE state once the host has authenticated. As you may already know, hosts use RSA keys in order to authenticate to the ADB daemon on the device. Authentication is typically a three step process:
      1. After a host tries to connect, the device sends an AUTH message of type TOKEN that includes a 20 byte random value (read from /dev/urandom).
      2. The host responds with a SIGNATURE packet that includes a SHA1withRSA signature of the random token with one of its private keys.
      3. The device tries to verify the received signature, and if signature verification succeeds, it responds with a CONNECT message and goes into the ONLINE state. If verification fails, either because the signature value doesn't match or because there is no corresponding public key to verify with, the device sends another AUTH TOKEN with a new random value, so that the host can try authenticating again (slowing down if the number of failures goes over a certain threshold).
      Signature verification typically fails the first time you connect the device to a new host because it doesn't yet have the host key. In that case the host sends its public key in an AUTH RSAPUBLICKEY message. The device takes the MD5 hash of that key and displays it in the 'Allow USB debugging' confirmation dialog. Since adbd is a native daemon, the key needs to be passed to the main Android OS. This is accomplished by simply writing the key to a local socket (aptly named 'adbd'). When you enable ADB debugging from the developer settings screen, a thread that listens on the 'adbd' socket is started. When it receives a message starting with "PK" it treats it as a public key, parses it, calculates the MD5 hash and displays the confirmation dialog (an activity actually, part of the SystemUI package). If you tap 'OK', it sends a simple "OK" response and adbd uses the key to verify the authentication message (otherwise it just stays offline). In case you check the 'Always allow from this computer' checkbox, the public key is written to disk and automatically used for signature verification the next time you connect to the same host. The allow/deny debugging functionality, along with starting/stopping the adbd daemon, is exposed as public methods of the UsbDeviceManager system service.

      We've described the ADB authentication protocol in some detail, but haven't said much about the actual keys used in the process. Those are 2048-bit RSA keys and are generated by the local ADB server. They are typically stored in $HOME/.android as adbkey and adbkey.pub. On Windows that usually translates to %USERPROFILE%\.android, but keys might end up in C:\Windows\System32\config\systemprofile\.android in some cases (see issue 49465). The default key directory can be overridden by setting the ANDROID_SDK_HOME environment variable. If the ADB_VENDOR_KEYS environment variable is set, the directory it points to is also searched for keys. If no keys are found in any of the above locations, a new key pair is generated and saved. On the device, keys are stored in the /data/misc/adb/adb_keys file, and new authorized keys are appended to the same file as you accept them. Read-only 'vendor keys' are stored in the /adb_keys file, but it doesn't seem to exist on current Nexus devices. The private key is in standard OpenSSL PEM format, while the public one consists of the Base 64 encoded key followed by a `user@host` user identifier, separated by a space. The user identifier doesn't seem to be used at the moment and is only meaningful on Unix-based OSes; on Windows it is always 'unknown@unknown'.

      While the USB debugging confirmation dialog helpfully displays a key fingerprint to let you verify you are connected to the expected host, the adb client doesn't have a handy command to print the fingerprint of the host key. You might think that there is little room for confusion: after all there is only one cable plugged into a single machine. But if you are running a couple of VMs, things can get a little fuzzy. Here's one way of displaying the host key's fingerprint in the same format the confirmation dialog uses (run in $HOME/.android or specify the full path to the public key file):

      awk '{print $1}' < adbkey.pub|openssl base64 -A -d -a \
      |openssl md5 -c|awk '{print $2}'|tr '[:lower:]' '[:upper:]'

      We've reviewed how secure ADB debugging is implemented and have shown why it is needed, but just to show that all of this solves a real problem, we'll finish off with a screenshot of what a failed ADB attack against a 4.2.2 device from another Android device looks like:


      Summary

      Android 4.2.2 finally adds a means to control USB access to the ADB daemon by requiring debug hosts to be explicitly authorized by the user and added to a whitelist. This helps prevent information extraction via USB, which requires only brief physical access and has been demonstrated to be quite effective. While secure debugging is not a feature most users will ever use directly, along with full disk encryption and a good screen lock password, it goes a long way towards making developer devices more secure.

      Android code signing

      We covered a new security feature introduced in the last Jelly Bean maintenance release in our last post and, before you know it, a new tag has already popped up in AOSP. Google I/O is just around the corner, and some interesting bits and pieces are trickling into the AOSP master branch, so it's probably time for a new post. There are plenty of places where you can get your rumour fix regarding I/O 2013, and it looks like build JDQ39E is going to be somewhat boring, so we will explore something different instead: code signing. This particular aspect of Android has remained virtually unchanged since the first public release, and is so central to the platform that it is pretty much taken for granted. While neither Java code signing nor its Android implementation is particularly new, some of the finer details are not particularly well-known, so we'll try to shed some more light on those. The first post of the series will concentrate on the signature formats used, while the next one will look into how code signing fits into Android's security model.

      Java code signing

      As we all know, Android applications are coded (mostly) in Java, and Android application package files (APKs) are just weird-looking JARs, so it pays to understand how JAR signing works first. 

      First off, a few words about code signing in general. Why would anyone want to sign code? For the usual reasons: integrity and authenticity. Basically, before executing any third-party program you want to make sure that it hasn't been tampered with (integrity) and that it was actually created by the entity that it claims to come from (authenticity). Those features are usually implemented by some digital signature scheme, which guarantees that only the entity owning the signing key can produce a valid code signature. The signature verification process verifies both that the code has not been tampered with and that the signature was produced with the expected key. One problem that code signing doesn't solve directly is whether the code signer (software publisher) can be trusted. The usual way trust is handled is by requiring the code signer to hold a digital certificate, which they attach to the signed code. Verifiers decide whether to trust the certificate either based on some trust model (e.g., PKI or web of trust), or on a case-by-case basis. Another problem that code signing does not solve (or even attempt to) is whether the signed code is safe to run. As we have seen, code that has been signed (or appears to be) by a trusted third party is not necessarily safe (e.g., Flame or pwdump7).

      Java's native code packaging format is the JAR file, which is essentially a ZIP file bundling together code (.class files or classes.dex in Android), some metadata about the package (.MF manifest files in the META-INF/ directory) and, optionally, resources the code uses. The main manifest file (MANIFEST.MF) has entries with the file name and digest value of each file in the archive. The start of the manifest file of a typical APK file is shown below (we'll use APKs instead of actual JARs for all examples).

      Manifest-Version: 1.0
      Created-By: 1.0 (Android)

      Name: res/drawable-xhdpi/ic_launcher.png
      SHA1-Digest: K/0Rd/lt0qSlgDD/9DY7aCNlBvU=

      Name: res/menu/main.xml
      SHA1-Digest: kG8WDil9ur0f+F2AxgcSSKDhjn0=

      Name: ...

      Java code signing is implemented at the JAR file level by adding another manifest file, called a signature file (.SF) which contains the data to be signed, and a digital signature over it (called a 'signature block file', .RSA, .DSA or .EC). The signature file is very similar to the manifest, and contains the digest of the whole manifest file (SHA1-Digest-Manifest), as well as digests for each of the individual entries in MANIFEST.MF.

      Signature-Version: 1.0
      SHA1-Digest-Manifest-Main-Attributes: ZKXxNW/3Rg7JA1r0+RlbJIP6IMA=
      Created-By: 1.6.0_45 (Sun Microsystems Inc.)
      SHA1-Digest-Manifest: zb0XjEhVBxE0z2ZC+B4OW25WBxo=

      Name: res/drawable-xhdpi/ic_launcher.png
      SHA1-Digest: jTeE2Y5L3uBdQ2g40PB2n72L3dE=

      Name: res/menu/main.xml
      SHA1-Digest: kSQDLtTE07cLhTH/cY54UjbbNBo=

      Name: ...

      The digests in the signature file can easily be verified by using the following OpenSSL commands:

      $ openssl sha1 -binary MANIFEST.MF |openssl base64
      zb0XjEhVBxE0z2ZC+B4OW25WBxo=
      $ echo -en "Name: res/drawable-xhdpi/ic_launcher.png\r\nSHA1-Digest: \
      K/0Rd/lt0qSlgDD/9DY7aCNlBvU=\r\n\r\n"|openssl sha1 -binary |openssl base64
      jTeE2Y5L3uBdQ2g40PB2n72L3dE=

      The first one takes the SHA1 digest of the entire manifest file and encodes it to Base 64 to produce the SHA1-Digest-Manifest value, and the second one simulates how the digest of a single manifest entry is calculated. The actual digital signature is in binary PKCS#7 (or more generally, CMS) format and includes the signature value and signing certificate. Signature block files produced using the RSA algorithm are saved with the extension .RSA, while those generated with DSA or EC keys get the .DSA or .EC extensions, respectively. Multiple signatures can be performed, resulting in multiple .SF and .RSA/DSA/EC files in the JAR file's META-INF/ directory. The CMS format is rather involved, allowing not only for signing, but for encryption as well, both with different algorithms and parameters, and is extensible via custom signed or unsigned attributes. A thorough discussion is beyond the scope of this post, but as used for JAR signing it basically contains the digest algorithm, signing certificate and signature value. Optionally, the signed data can be included in the SignedData CMS structure (attached signature), but JAR signatures don't include it (detached signature). Here's what an RSA signature block file looks like when parsed into ASN.1 (certificate info trimmed):

      $ openssl asn1parse -i -inform DER -in CERT.RSA
      0:d=0 hl=4 l= 888 cons: SEQUENCE
      4:d=1 hl=2 l= 9 prim: OBJECT :pkcs7-signedData
      15:d=1 hl=4 l= 873 cons: cont [ 0 ]
      19:d=2 hl=4 l= 869 cons: SEQUENCE
      23:d=3 hl=2 l= 1 prim: INTEGER :01
      26:d=3 hl=2 l= 11 cons: SET
      28:d=4 hl=2 l= 9 cons: SEQUENCE
      30:d=5 hl=2 l= 5 prim: OBJECT :sha1
      37:d=5 hl=2 l= 0 prim: NULL
      39:d=3 hl=2 l= 11 cons: SEQUENCE
      41:d=4 hl=2 l= 9 prim: OBJECT :pkcs7-data
      52:d=3 hl=4 l= 607 cons: cont [ 0 ]
      56:d=4 hl=4 l= 603 cons: SEQUENCE
      60:d=5 hl=4 l= 452 cons: SEQUENCE
      64:d=6 hl=2 l= 3 cons: cont [ 0 ]
      66:d=7 hl=2 l= 1 prim: INTEGER :02
      69:d=6 hl=2 l= 1 prim: INTEGER :04
      72:d=6 hl=2 l= 13 cons: SEQUENCE
      74:d=7 hl=2 l= 9 prim: OBJECT :sha1WithRSAEncryption
      85:d=7 hl=2 l= 0 prim: NULL
      87:d=6 hl=2 l= 56 cons: SEQUENCE
      89:d=7 hl=2 l= 11 cons: SET
      91:d=8 hl=2 l= 9 cons: SEQUENCE
      93:d=9 hl=2 l= 3 prim: OBJECT :countryName
      98:d=9 hl=2 l= 2 prim: PRINTABLESTRING :JP
      ...
      735:d=5 hl=2 l= 9 cons: SEQUENCE
      737:d=6 hl=2 l= 5 prim: OBJECT :sha1
      744:d=6 hl=2 l= 0 prim: NULL
      746:d=5 hl=2 l= 13 cons: SEQUENCE
      748:d=6 hl=2 l= 9 prim: OBJECT :rsaEncryption
      759:d=6 hl=2 l= 0 prim: NULL
      761:d=5 hl=3 l= 128 prim: OCTET STRING [HEX DUMP]:892744D30DCEDF74933007...

      If we extract the contents of a JAR file, we can use the OpenSSL smime (CMS is the basis of S/MIME) command to verify its signature by specifying the signature file as the content (signed data). It will print the signed data and the verification result:

      $ openssl smime -verify -in CERT.RSA -inform DER -content CERT.SF signing-cert.pem
      Signature-Version: 1.0
      SHA1-Digest-Manifest-Main-Attributes: ZKXxNW/3Rg7JA1r0+RlbJIP6IMA=
      Created-By: 1.6.0_43 (Sun Microsystems Inc.)
      SHA1-Digest-Manifest: zb0XjEhVBxE0z2ZC+B4OW25WBxo=

      Name: res/drawable-xhdpi/ic_launcher.png
      SHA1-Digest: jTeE2Y5L3uBdQ2g40PB2n72L3dE=

      ...
      Verification successful

      The official tools for JAR signing and verification are the jarsigner and keytool commands from the JDK. Since Java 5.0, jarsigner also supports timestamping the signature by a TSA, which can be quite useful when you need to ascertain the time of signing (e.g., before or after the signing certificate expired), but this feature is not widely used. Using the jarsigner command, a JAR file is signed by specifying a keystore file, the alias of the key to use for signing (used as the base name for the signature block file) and, optionally, a signature algorithm. One thing to note is that since Java 7 the default algorithm has changed to SHA256withRSA, so you need to explicitly specify it if you want to use SHA1. Verification is performed in a similar fashion, but the keystore file is used to search for trusted certificates, if specified (again using an APK file instead of an actual JAR):

      $ jarsigner -keystore debug.keystore -sigalg SHA1withRSA test.apk androiddebugkey
      $ jarsigner -keystore debug.keystore -verify -verbose -certs test.apk
      ....

      smk 965 Mon Apr 08 23:55:34 JST 2013 res/drawable-xxhdpi/ic_launcher.png

      X.509, CN=Android Debug, O=Android, C=US (androiddebugkey)
      [certificate is valid from 6/18/11 7:31 PM to 6/10/41 7:31 PM]

      smk 458072 Tue Apr 09 01:16:18 JST 2013 classes.dex

      X.509, CN=Android Debug, O=Android, C=US (androiddebugkey)
      [certificate is valid from 6/18/11 7:31 PM to 6/10/41 7:31 PM]

      903 Tue Apr 09 01:16:18 JST 2013 META-INF/MANIFEST.MF
      956 Tue Apr 09 01:16:18 JST 2013 META-INF/CERT.SF
      776 Tue Apr 09 01:16:18 JST 2013 META-INF/CERT.RSA

      s = signature was verified
      m = entry is listed in manifest
      k = at least one certificate was found in keystore
      i = at least one certificate was found in identity scope

      jar verified.

      The last command verifies the signature block and signing certificate, ensuring that the signature file has not been tampered with. It then verifies that each digest in the signature file (CERT.SF) matches its corresponding section in the manifest file (MANIFEST.MF). One thing to note is that the number of entries in the signature file does not necessarily have to match those in the manifest file. Files can be added to a signed JAR without invalidating its signature: as long as none of the original files have been changed, verification succeeds. Finally, jarsigner reads each manifest entry and checks that the file digest matches the actual file contents. Optionally, it checks whether the signing certificate is present in the specified key store (if any). As of Java 7, there is a new -strict option that will perform additional certificate validations. Validation errors are treated as warnings and reflected in the exit code of the jarsigner command. As you can see, it prints certificate details for each entry, even though they are the same for all entries. A slightly better way to view signer info when using Java 7 is to specify the -verbose:summary or -verbose:grouped options, or alternatively use the keytool command:

      $ keytool -list -printcert -jarfile test.apk
      Signer #1:

      Signature:

      Owner: CN=Android Debug, O=Android, C=US
      Issuer: CN=Android Debug, O=Android, C=US
      Serial number: 4dfc7e9a
      Valid from: Sat Jun 18 19:31:54 JST 2011 until: Mon Jun 10 19:31:54 JST 2041
      Certificate fingerprints:
      MD5: E8:93:6E:43:99:61:C8:37:E1:30:36:14:CF:71:C2:32
      SHA1: 08:53:74:41:50:26:07:E7:8F:A5:5F:56:4B:11:62:52:06:54:83:BE
      Signature algorithm name: SHA1withRSA
      Version: 3

      Once you know the signature block file name (by listing the archive contents, for example), you can also use OpenSSL in combination with the unzip command to easily extract the signing certificate to a file:

      $ unzip -q -c test.apk META-INF/CERT.RSA|openssl pkcs7 -inform DER -print_certs -out cert.pem

      Android code signing

      As evident from the examples above, Android code signing is based on Java JAR signing, and you can use the regular JDK tools to sign or verify APKs. Besides those, there is an Android-specific tool in the AOSP build/ directory, aptly named signapk. It performs pretty much the same task as jarsigner in signing mode, but there are also a few notable differences. To start with, while jarsigner requires keys to be stored in a compatible key store file, signapk takes separate signing key (in PKCS#8 format) and certificate (in DER format) files as input. While it does appear to have some support for reading DSA keys, it can only produce signatures with the SHA1withRSA mechanism. Raw private keys in PKCS#8 format are somewhat hard to come by, but you can easily generate a test key pair and a self-signed certificate using the make_key script found in development/tools. If you have existing OpenSSL keys you cannot use them as is, however; you will need to convert them using OpenSSL's pkcs8 command:

      echo "keypwd"|openssl pkcs8 -in mykey.pem -topk8 -outform DER -out mykey.pk8 -passout stdin

      Once you have the needed keys, you can sign an APK like this:

      $ java -jar signapk.jar cert.cer key.pk8 test.apk test-signed.apk

      Nothing new so far, except the somewhat exotic (but easily parsable by JCE classes) key format. However, signapk has an extra 'sign whole file' mode, enabled with the -w option. When in this mode, in addition to signing each individual JAR entry, the tool generates a signature over the whole archive as well. This mode is not supported by jarsigner and is specific to Android. So why sign the whole archive when each of the individual files is already signed? In order to support over-the-air updates (OTA), naturally :). If you have ever flashed a custom ROM, or been impatient and updated your device manually before it picked up the official update broadcast, you know that OTA packages are ZIP files containing the updated files and scripts to apply them. It turns out, however, that they are a lot more like JAR files on the inside. They come with a META-INF/ directory, manifests and a signature block, plus a few other extras. One of those is the /META-INF/com/android/otacert file, which contains the update signing certificate (in PEM format). Before booting into recovery to actually apply the update, Android will verify the package signature, then check that the signing certificate is one that is trusted to sign updates. OTA trusted certificates are completely separate from the 'regular' system trust store, and reside in, you guessed it, a ZIP file, usually stored as /system/etc/security/otacerts.zip. On a production device it will typically contain a single file, likely named releasekey.x509.pem.

      Going back to the original question: if OTA files are JAR files, and JAR files don't support whole-file signatures, where does the signature go? The Android signapk tool slightly abuses the ZIP format by adding a null-terminated string comment in the ZIP comment section, followed by the binary signature block and a 6-byte final record containing the signature offset and the size of the entire comment section. This makes it easy to verify the package by first reading and verifying the signature block from the end of the file, and only reading the rest of the file (which for a major upgrade might be in the hundreds of MBs) if the signature checks out. If you want to manually verify the package signature with OpenSSL, you can separate the signed data and the signature block with a script like the one below, where the first argument is the OTA package, the second is the signature block file to write, and the third is the signed ZIP file (without the comment section) to write:

      #!/usr/bin/env python

      import os
      import sys
      import struct

      file_name = sys.argv[1]
      file_size = os.stat(file_name).st_size

      f = open(file_name, 'rb')
      # the comment section ends with a 6-byte footer holding the
      # signature offset (from the end of file) and the comment size
      f.seek(file_size - 6)
      footer = f.read(6)

      sig_offset = struct.unpack('<H', footer[0:2])
      sig_start = file_size - sig_offset[0]
      # the signature block sits between the string comment and the footer
      sig_size = sig_offset[0] - 6
      f.seek(sig_start)
      sig = f.read(sig_size)

      # the signed data is everything before the ZIP comment section:
      f.seek(0)
      # 2 bytes comment length + 18 bytes string comment
      sd = f.read(file_size - sig_offset[0] - 2 - 18)
      f.close()

      sf = open(sys.argv[2], 'wb')
      sf.write(sig)
      sf.close()

      zf = open(sys.argv[3], 'wb')
      zf.write(sd)
      zf.close()

      Summary

      Android relies heavily on the Java JAR format, both for application packages (APKs) and for system updates (OTA packages). APK signing uses a subset of the JAR signing specification as is, while OTA packages use a custom format that generates a signature over the whole file. Standalone package verification can be performed with standard JDK tools or OpenSSL (after some preprocessing). The Android OS and recovery system follow the same verification procedures before installing APKs or applying system updates. In the next article we will explore how the OS uses package signatures and how they fit into Android's security model. 

      Code signing in Android's security model

      In the previous post we introduced code signing as implemented in Android and saw that it is practically identical to JAR signing. Android requires all installed packages to be signed and makes heavy use of the attached code signing certificates in its security model. This is where the major differences with other platforms that use code signing lie, so we will explore the topic in more detail.

      Java access control

      Before we start digging into Android's security model, let's go through a quick overview of the corresponding features of the Java platform. Java was initially designed to support running potentially untrusted code, downloaded from a public network (mostly applets). The initial applet sandbox model was extended to a more flexible, policy-based scheme where different permissions can be granted based on the code's origin and author. Code origin refers to the place where classes are loaded from, typically a local file or a remote URL, while authorship is asserted via code signatures and is represented by the signer's certificate chain. Combined, those two properties define a code source. Each code source is granted a set of permissions based on a policy; the default implementation reads rules from a policy file (created with the policytool). At runtime a security manager (if installed) enforces access control by comparing code elements on the stack with the current policy. It throws a SecurityException if the permissions required to access a resource have not been granted to the requesting code source. Java code that runs in (or is started from) the browser, such as applets or Java Web Start applications, automatically runs with a security manager installed, while for local applications you need to explicitly set the java.security.manager system property in order to install one. In practice, a security manager for local code is only used with some application servers, and it is usually disabled by default. The platform supports a wide range of permissions, the major ones being file- and socket-oriented, as well as different types of runtime permissions which control operations ranging from class and library loading to managing the current security manager. By defining multiple code sources and assigning each one specific permissions, one can implement fine-grained access control for both local and remote code.
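
      As a quick illustration, here is what a policy file entry granting permissions based on both origin and author might look like (the 'acme' signer alias and the URL are, of course, hypothetical):

      grant signedBy "acme", codeBase "https://www.example.com/classes/" {
          permission java.io.FilePermission "/tmp/*", "read";
          permission java.net.SocketPermission "*.example.com:443", "connect";
      };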

      As we mentioned though, unless you are in the browser plugin or application server development business, chances are you hadn't heard about any of this until the beginning of this year. Just when everyone thought that Java applets were for all intents and purposes dead, they made somewhat of a comeback as a malware distribution medium. A series of vulnerabilities were discovered in the Oracle Java implementation that allow applets to escape the sandbox they run in and reset the security manager, effectively granting themselves full privileges. The exploits used to achieve this employ techniques ranging from reflection recursion to direct memory manipulation to bypass runtime security checks. Oracle has responded by releasing a series of patches, changing the default applet execution policy and introducing more visible warnings to let users know that potentially harmful code is being executed. Naturally, different ways to bypass these measures keep being discovered.

      In short, Java has had full-featured code access control for some time, even though the most widely used implementation appears to be lacking in enforcing it. But let's (finally!) get back to Android now. As the Java code access control mechanism can use code signer identity to define code sources and grant permissions, and Android code is required to be signed, one might expect that our favourite mobile OS would be making use of Java's security model in some form, just as it does with JAR files. As it turns out, this is not the case. Access control related classes are part of the Java API, and are indeed available in Android. However, looking at the implementation reveals that they are practically empty, with just enough code to compile. In addition, they feature a prominent 'Legacy security code; do not use.' notice. So why bother reviewing all of the above then? Even though Android's access control model is very different from the legacy Java one, it does borrow some of the same ideas, and a comparison is helpful when discussing the design decisions made.

      Android security architecture basics

      Before we discuss the role of code signing in Android's security model, let's say a few words about Android's general security architecture. As we know, Android is Linux-based and relies heavily on traditional UNIX features to implement its security architecture. Each application runs in a separate process with a distinct identity (user ID, UID). By default apps cannot modify each other's resources, and this is enforced by Linux, which doesn't allow different processes to access memory or files they don't own (unless access is explicitly granted by the owner, a.k.a. discretionary access control). Additionally, each app (UID) is granted a set of logical permissions at install time, and cannot perform operations (call APIs) that require permissions it doesn't have. This is the biggest difference compared to the 'standard' Java permission model: code from different sources running in a single process cannot have different permissions, since permissions are granted at the UID level. Most permissions cannot be dynamically granted after the package has been installed; however, as of 4.2 a number of 'development' permissions (e.g., READ_LOGS, WRITE_SECURE_SETTINGS) have been introduced that can be granted or revoked on demand using the pm grant/revoke command (or matching system APIs). Before installing an app, the system shows a confirmation dialog listing the permissions the app requests. With the exception of the new 'development' permissions, all requested permissions are permanently granted if the user allows the install. For a certain messaging app it looks like this in Jelly Bean:



      Android permissions are typically implemented by mapping them to Linux groups that have the necessary read/write access to relevant system resources (files or sockets) and thus are ultimately enforced by the Linux kernel. Some permissions are enforced by system daemons or services by explicitly checking if the calling UID is whitelisted to perform a particular operation. The network access permission (INTERNET) is somewhat of a hybrid: it is mapped to a group (inet), but since network access is not associated with one particular socket, the kernel checks whether processes trying to open a socket are members of the inet group on each related system call (known as 'paranoid network security').

      Each permission has an associated 'protection level' that indicates how the system proceeds when deciding whether to grant or deny the permission. The two levels most relevant to our discussion are signature and signatureOrSystem. The former is granted only to apps signed with the same certificate as the package declaring the permission, while the latter is granted to apps that are in the Android system image, even if the signer is different.

      Besides the built-in permissions, custom permissions can also be defined by declaring them in the app manifest file. Those can be enforced statically by the system or dynamically by app components. Permissions attached to components (activities, services, broadcast receivers or content providers) defined in AndroidManifest.xml are automatically enforced by the system. Components can also make use of framework APIs to check whether the calling UID has been granted a required permission on a case-by-case basis (e.g., only for write operations, etc.). We will introduce other permission-related details as necessary later, but you can refer to this Marakana presentation for a more complete and thorough discussion of Android permissions (and more). Of course, some official documentation is also available.
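
      To make the static enforcement part concrete, here is a sketch of how a custom signature-level permission might be declared and attached to a component (all names are hypothetical):

      <manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.provider">
          <!-- only apps signed with the same certificate can be granted this -->
          <permission android:name="com.example.permission.ACCESS_PRIVATE_DATA"
              android:protectionLevel="signature" />
          <application>
              <!-- the system enforces the permission for this service -->
              <service android:name=".PrivateService"
                  android:permission="com.example.permission.ACCESS_PRIVATE_DATA" />
          </application>
      </manifest>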

      The role of code signing

      As we saw in the previous article, Android code signing is based on Java JAR signing. Consequently, it uses public key cryptography and X.509 certificates, as do a lot of other code signing schemes. However, this is where the similarities end. In practically all other platforms that use code signing (for example Java ME), the code signing certificate needs to be issued by a CA that the platform trusts. While there is no lack of CAs that issue code signing certificates, in reality it is quite difficult to obtain a certificate that will be trusted by all targeted devices. Android solves this problem quite simply: it doesn't care about the actual signing certificate. Thus you do not need to have it issued by a CA (although you could, and most will happily take your money), and virtually all code signing certificates used in Android are self-signed. Additionally, you don't need to assert your identity in any way: you can use pretty much anything as the subject name (the Google Play store does have a few checks to weed out some common names, but not the OS itself). Signing certificates are treated as binary blobs by Android, and the fact that they are in X.509 format is merely a consequence of using the JAR format. Android doesn't validate certificates as such: if the certificate is not self-signed, the signing CA's certificate does not have to be present, let alone trusted; Android will also happily install apps with an expired signing certificate. If you are coming from a traditional PKI background, this may sound like heresy, but try to keep an open mind and note that Android does not make use of PKI for code signing.

      So what are code signing certificates used for then? Two things: making sure updates for an app come from the same author (same origin policy), and establishing trust relationships between applications. Both are implemented by comparing the signing certificate of the currently installed target app with the certificate of the update or related application. Comparison boils down to calling Arrays.equals() on the binary (DER) representation of both certificates. This method naturally knows nothing about CAs or expiration dates. One consequence of this is that once an app (identified by a unique package name) is installed, updates need to use the exact same signing certificates (with one exception, see the next section). While multiple signatures on Android apps are not common, if the original application was signed by more than one signer, any updates need to be signed by the same signers, each using its original signing certificate. This means that if your signing certificate(s) expire, you cannot update your app and need to release a new one instead. This would result not only in losing any existing user base or ratings, but more importantly in losing access to the legacy app's data and settings (again, there are some exceptions). The solution to this problem is quite simple: don't let your certificate expire. The currently recommended validity period is at least 25 years, and the Google Play Store requires validity until at least October 2033 (Y2K33?). While technically this only amounts to putting off the problem, proper certificate migration support might eventually be added to the platform. Unfortunately, this means that if your signing key is lost or compromised, you are currently out of luck.
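
      Apps can perform the same same-origin check themselves via the PackageManager. A minimal sketch, with hypothetical package names:

      PackageManager pm = context.getPackageManager();
      // SIGNATURE_MATCH is returned only if the signing certificates
      // of the two packages are byte-for-byte identical
      if (pm.checkSignatures("com.example.app", "com.example.plugin")
              == PackageManager.SIGNATURE_MATCH) {
          // treat com.example.plugin as coming from the same author
      }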

      Let's examine the major uses of code signing in Android in detail.

      Application authenticity and identity

      In Android all apps are managed by the system PackageManagerService, whether they are pre-installed, downloaded from an app market or side-loaded. It keeps a database of currently installed apps, including their signing certificate(s), granted permissions and additional metadata in the /data/system/packages.xml file. A typical entry for a user-installed app might look like this:

      <package codepath="/data/app/com.chrome.beta-2.apk" 
      flags="572996" ft="13e20480558"
      installer="com.android.vending"
      it="13ca981cbe3" name="com.chrome.beta"
      nativelibrarypath="/data/app-lib/com.chrome.beta-2"
      userid="10092" ut="13e204816ce" version="1453060">
      <sigs count="1">
      <cert index="8" />
      </sigs>
      <perms>
      <item name="android.permission.NFC"/>
      ...
      <item name="com.android.browser.permission.READ_HISTORY_BOOKMARKS"/>
      </perms>
      </package>

      As you can see above, a package entry specifies the package name, the location of the APK and associated libraries, the assigned UID and some additional install metadata such as install and update time. This is followed by the number of signatures and the signing certificate as a hexadecimal string. Since a hex-encoded certificate will usually take up around 2K, the actual certificate contents are listed only once. All subsequent packages signed with the same certificate only refer to it by index, as is the case above. The PackageManagerService uses the <cert/> values in packages.xml to decide whether an update is signed with the same certificate as the original app. The certificate is followed by the list of permissions the package has been granted. All of this information is cached in memory (keyed by package name) at runtime for performance reasons.

      Just like user-installed apps, pre-installed apps (usually found in /system/app) can be updated without a full-blown system update, usually via the Play Store or a similar app distribution service. As the /system partition is mounted read-only though, updates are installed in /data, while the original app remains as is. In addition to a <package/> entry, such an app will also have an <updated-package/> entry that might look like this:

      <updated-package name="com.google.android.youtube" 
      codePath="/system/app/YouTube.apk"
      ft="13cd6667b50" it="13ae93df638" ut="13cd6667b50"
      version="4216"
      nativeLibraryPath="/data/app-lib/com.google.android.youtube-1"
      userId="10067">
      <perms>
      <item name="android.permission.NFC" />
      ...
      </perms>
      </updated-package>

      The update (in /data/app) inherits the original app's permissions and UID. System apps receive another special treatment as well: if an updated APK is installed over the original one (in /system/app), it is allowed to be signed with a different certificate. The rationale behind this is that if the installer has enough privileges to write to /system, it can be trusted to change the signing certificate as well. The UID, and any files and permissions, are retained. Again, there is an exception though: if the package is part of a shared user (discussed in the next section), the signature cannot be updated, because that would affect other apps as well. In the reverse case, when a new system app is signed by a different certificate than that of the currently installed non-system app (with the same package name), the non-system app will be deleted first.

      Speaking of system apps, most of those are signed by a number of so called 'platform keys'. There are four different keys in the current AOSP tree, named platform, shared, media and testkey. All packages considered part of the core platform (System UI, Settings, Phone, Bluetooth etc.) are signed with the platform key, launcher and contacts related packages -- with the shared key, the gallery app and media related providers -- with the media key, and everything else (including packages that don't explicitly specify the signing key) -- with the testkey. One thing to note is that the keys distributed with AOSP are in no way special, even though they have 'Google' in the certificate DN. Using them to sign your apps will not give you any specific privileges, you will need the actual keys Google or your carrier/device manufacturer uses. Even though the associated certificates may happen to have the same DN as the ones in AOSP, they are different and very unlikely to be publicly accessible (except maybe for some custom ROMs which may use the AOSP keys as is). Sharing the signing key allows packages to work together and establish trust relationships, which we will discuss next.
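
      For example, signing an APK with the AOSP platform key (found in build/target/product/security/) uses the same signapk invocation shown earlier (the APK names are hypothetical):

      $ java -jar signapk.jar platform.x509.pem platform.pk8 MyApp.apk MyApp-signed.apk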

      Inter-application trust relationships

      Signature permissions

      As we mentioned above, Android permissions (system or custom) can be declared with the signature protection level. With this level, the permission is only granted if the requesting app is signed by the same signer as the package declaring the permission. This can be thought of as a limited form of mandatory access control (MAC). Custom (app-declared) permissions are declared in the package's AndroidManifest.xml file and are added to the system when the package is installed. Just as with other package data, permissions are saved in the /data/system/packages.xml file, as children of the <permissions/> element. Here's what the declaration of a custom permission used by some Google apps looks like:

      <permissions>
      ...
      <item name="com.google.android.googleapps.permission.ACCESS_GOOGLE_PASSWORD"
      package="com.google.android.gsf.login"
      protection="2" />
      ...
      </permissions>

      The entry has the permission name, declaring package and protection level (2 corresponds to signature) as attributes. When installing a package that requests this permission, the PackageManagerService will perform binary comparison (just as when upgrading packages) of its signing certificate against the certificate of the Google Login Service (the declaring package, com.google.android.gsf.login) in order to decide whether to grant the permission. A noteworthy detail is that the system cannot grant a permission it doesn't know about. That is, if app A declares permission 'foo' and app B uses it, app B needs to be installed after app A, otherwise you will get a warning at install time and the permission won't be granted. Since app installation order typically cannot be guaranteed, the usual workaround for this situation is to declare the permission in both apps. Permissions can also be added and removed dynamically using the PackageManager.addPermission() API (known as 'dynamic permissions'). However, packages can only add permissions to a permission tree they define (i.e., you cannot add permissions to another app).

      That mostly explains custom permissions, but what about built-in system permissions with the signature protection level? They work exactly like custom permissions, except that the package that defines them is special. They are defined in the android package, sometimes also referred to as 'the framework' or 'the platform'. The core android framework is the set of classes shared by system services, some of them exposed via the public SDK. Those are packaged in JAR files found in /system/framework. Interestingly, those JAR files are not signed: while Android borrows the JAR format to implement code signing, only APK files are signed, not actual JARs. The only APK file in the framework directory is framework-res.apk. As the name implies, it packages framework resources (animations, drawables, layouts, etc.), but no actual code. Most importantly, it defines the android package and system permissions. Thus any app trying to request a system-level signature permission needs to be signed with the same certificate as the framework resource package. Not surprisingly, it is signed by the platform key discussed in the previous section (usually found in build/target/product/security/platform.pk8|.x509.pem). The associated certificate may look something like this for an AOSP build:

      Version: 3 (0x2)
      Serial Number: 12941516320735154170 (0xb3998086d056cffa)
      Signature Algorithm: md5WithRSAEncryption
      Issuer: C=US, ST=California, L=Mountain View, O=Android, OU=Android,
      CN=Android/emailAddress=android@android.com
      Validity
      Not Before: Apr 15 22:40:50 2008 GMT
      Not After : Sep 1 22:40:50 2035 GMT
      Subject: C=US, ST=California, L=Mountain View, O=Android, OU=Android,
      CN=Android/emailAddress=android@android.com

      Shared user ID

      Android provides an even stronger inter-app trust relationship than signature permissions: the ability for different apps to run as the same UID, and optionally in the same process. It is usually referred to as 'shared user ID'. This feature is extensively used by core framework services and system applications, and while the Android team does not recommend that third-party applications use it, it is available to user applications as well. It is enabled by adding the android:sharedUserId attribute to AndroidManifest.xml's root element. The 'user ID' specified in the manifest needs to be in Java package format (containing at least one '.') and is used as an identifier, much like package names for applications. If the specified shared UID does not exist, it is simply created, but if another package with the same shared UID is already installed, the signing certificate is compared to that of the existing package, and if they do not match, an INSTALL_FAILED_SHARED_USER_INCOMPATIBLE error is returned and installation fails. Adding the sharedUserId attribute to the new version of an already installed app would cause it to change its UID, which would result in losing access to its own files (as was the case in some previous Android versions). Therefore, this is disallowed by the system, and it will reject the update with the INSTALL_FAILED_UID_CHANGED error. In short, if you plan to use a shared UID for your apps, you have to design for it from the start, and have them use it since the very first release.
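
      Requesting a shared UID is a single manifest attribute; a sketch with hypothetical names:

      <manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.app.one"
          android:sharedUserId="com.example.shared">
          ...
      </manifest>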

      A shared UID is a first class object in the system's packages.xml and is treated much like apps are: it has associated signing certificate(s) and permissions. Android has 5 built-in shared UIDs, automatically added when the system is bootstrapped:
      • android.uid.system (SYSTEM_UID, 1000)
      • android.uid.phone (PHONE_UID, 1001)
      • android.uid.bluetooth (BLUETOOTH_UID, 1002)
      • android.uid.log (LOG_UID, 1007)
      • android.uid.nfc (NFC_UID, 1027)

      Here's how the system shared UID is defined:

      <shared-user name="android.uid.system" userId="1000">
      <sigs count="1">
      <cert index="4" />
      </sigs>
      <perms>
      <item name="android.permission.MASTER_CLEAR" />
      <item name="android.permission.CLEAR_APP_USER_DATA" />
      <item name="android.permission.MODIFY_NETWORK_ACCOUNTING" />
      ...
      </shared-user>

      As you can see, apart from having a bunch of scary permissions (about 60 on a 4.2 device), the declaration is very similar to the package declarations we showed previously. Conversely, packages that are part of a shared UID do not have an associated granted permission list. They inherit the permissions of the shared UID, which are a union of the permissions requested by all currently installed packages with the same shared UID. A side effect of this is that if a package is part of a shared UID, it can access APIs it hasn't explicitly requested permissions for, as long as some package with the same shared UID has already requested them. Permissions are dynamically removed from the <shared-user/> declaration as packages are installed or uninstalled though, so the set of available permissions is neither guaranteed nor constant. Here's what the declaration of a system app (KeyChain) that runs under a shared UID looks like. It references the shared UID with the sharedUserId attribute and lacks explicit permission declarations:

      <package name="com.android.keychain" 
      codePath="/system/app/KeyChain.apk"
      nativeLibraryPath="/data/app-lib/KeyChain"
      flags="540229" ft="13cd65721a0"
      it="13c2d4721f0" ut="13cd65721a0"
      version="17"
      sharedUserId="1000">
      <sigs count="1">
      <cert index="4" />
      </sigs>
      </package>

      The shared UID is not just a package management construct, it actually maps to a shared Linux UID at runtime as well. Here is an example of two system apps running under the system UID:

system    5901  9852  845708 40972 ffffffff 00000000 S com.android.settings
system    6201  9852  824756 22256 ffffffff 00000000 S com.android.keychain

The ultimate trust level on Android is, of course, running in the same process. Since apps that are part of the same shared UID already have the same Linux UID and can access the same system resources, this is not a problem. It can be requested by specifying the same process name in the process attribute of the <application/> element in the manifest for all apps that need to run in one process. While the obvious result of this is that the apps can share memory and communicate directly instead of using RPC, some system services allow special access to components running in the same process (for example, direct access to cached passwords or getting authentication tokens without showing UI prompts). Google apps take advantage of this by requesting to run in the same process as the login service in order to be able to sync data in the background without user interaction (e.g., Play Services and the Google location service). Naturally, they are signed with the same certificate and are part of the com.google.uid.shared shared UID.

      Summary

      Android uses the Java JAR format for code signing, and signatures can be added to both application packages (APKs) and system update packages (OTA updates). While JAR signing is based on X.509 certificates and PKI, Android does not use or validate the signer certificates as such. They are treated as binary blobs and an exact byte match is required in order for the system to consider two packages signed by the same signer. Package signature matching is at the heart of the Android security model, used both to guarantee that package updates come from the same origin and when establishing inter-application trust relationships. Inter-app trust relationships can be created either using signature-level permissions (built-in or custom), or by allowing apps to share the same system UID and, optionally, process. 

      Building a wireless Android device using BeagleBone Black

Our previous posts were about code signing in Android, and they turned out to be surprisingly relevant with the announcement of the 'master key' Android code signing vulnerability. While details are yet to be formally released, it has already been patched and dissected, so we'll skip that one and try something different for a change. This post is not directly related to Android security, but will discuss some Android implementation details, so it might be of some interest to our regular readers. Without further ado, let's get closer to the metal than usual and build a wireless Android device (almost) from scratch.

      Board introduction -- BeagleBone Black

For our device we'll use the recently released BeagleBone Black board. So what is a BeagleBone Black (let's call it BBB from now on), then? It's the latest addition to the ranks of ARM-based, credit-card-sized single board computers. It comes with an AM335x 1GHz ARM Cortex-A8 CPU, 512MB RAM, 2GB on-board eMMC flash, Ethernet, HDMI and USB ports, plus a whole lot of I/O pins. Best of all, it's open source hardware, and all schematics and design documents are freely available. It's hard to beat the price of $45 and it looks much, much better than the jagged Raspberry Pi. It comes with Angstrom Linux pre-installed, but can run pretty much any Linux flavour, and of course, Android. It is being used for anything from blinking LEDs to tracking satellites. You can hook it up to circuits you've built or quickly extend it using one of the many 'cape' plug-in boards available. We'll use a couple of those for our project, so 'building' refers mostly to creating an Android build compatible with our hardware. We'll detail the hardware later, but let's first outline some simple requirements for our mobile Android device:
      1. touch screen input
      2. wireless connectivity via WiFi
      3. battery powered
      Here's what we start with:

      Building a kernel for Android

Android support for AM335x-based devices is provided by the rowboat project. It integrates the required kernel and OS patches and provides build configurations for each of the supported devices, including the BBB. The latest version is based on Android 4.2.2, and if you want to get started quickly, you can download a binary build from TI's Sitara Android development kit page. All you need to do is flash it to an SD card, connect the BBB to an HDMI display and power it on. You will instantly get a fully working, hardware-accelerated Jelly Bean 4.2 device you can control using a standard USB keyboard and mouse. If that is all you need, you might as well stop reading here. Our first requirement, however, is a working touch screen, not an HDMI monitor, so we have some work to do. As it happens, a number of LCD capes are already available for the BBB (from circuitco and others), so those are our first choice. We opted for the LCD4 4.3" cape, which offers almost reasonable resolution and is small enough to be directly attached to the BBB. Unfortunately it doesn't work with the rowboat build from TI. To understand why, let's take a step back and discuss how the BBB supports extension hardware, including capes.

      Linux Device Tree and cape support

If you look at the expansion header pinout table in the BBB reference manual, you will notice that each pin can serve multiple purposes, depending on configuration. This is called 'pinmuxing' and is the method modern SoCs use to multiplex multiple peripheral functions to a limited set of physical pins. The AM335x CPU the BBB uses is no exception: it has pins with up to 8 possible peripheral functions. So, in order for a cape to work, the SoC needs to be configured to use the correct inputs/outputs for that cape. The situation becomes more complicated when you have multiple capes (up to 4 at a time). BBB capes solve this by using an EEPROM that stores enough data to identify the cape, its revision and serial number. At boot time, the kernel identifies the capes by reading their EEPROMs, computes the optimal configuration (or outputs an error if the connected capes are not compatible) and sets the expansion header pinmux accordingly. Initially, this was implemented in a 'board file' in the Linux kernel, and adding a new cape required modifying the kernel and making sure all possible cape configurations were supported. Needless to say, this is not an easy task, and getting it merged into the Linux mainline is even harder. Since everyone is building some sort of ARM device nowadays, the number of board files and variations thereof reached critical mass, and the Linux kernel maintainers decided to decouple board-specific behaviour from the kernel. The mechanism for doing this is called Device Tree (DT) and its goal is to make life easier both for device developers (no need to hack the kernel for each device) and kernel maintainers (no need to merge board-specific patches every other day). A DT is a data structure describing hardware, which is passed to the kernel at boot time. Using the DT, a generic board driver can configure itself dynamically. The BBB ships with a 3.8 kernel and takes full advantage of the new DT architecture. Cape support is naturally implemented using DT source (DTS) files and even goes a step further than mainline Linux by introducing a Cape Manager, an in-kernel mechanism for dynamically loading Device Tree fragments from userspace. This allows for runtime (vs. boot time) loading of capes via sysfs, resource conflict resolution (where possible), manual control over already loaded capes and more.

      Going back to Android, the rowboat Android port is using the 3.2 kernel and relies on manual porting of extension peripheral configuration to the kernel board file. As it happens, support for our LCD4 cape is not there yet. We could try to patch the kernel based on the 3.8 DTS files, or take the plunge and attempt to run Android using 3.8. Since all BBB active development is going on in the 3.8 branch, using the newer version is the better (if more involved) choice.

      Using the 3.8 kernel

As we know, Android adds a bunch of 'Androidisms' to the Linux kernel, most notably wakelocks, alarm timers, ashmem, binder, the low memory killer and 'paranoid' network security. Thus, until recently, you could not use a vanilla Linux kernel as is to run Android, and a number of Android-specific patches needed to be applied first. Fortunately, thanks to the Android Mainlining Project, most of these features are already merged (in one form or another) in the 3.8 kernel and are available as staging drivers. What this means is that we can take a 3.8 kernel that works well on the BBB and use it to run Android. Unfortunately, the BBB can't quite use a vanilla 3.8 kernel yet and requires quite a few patches (including the Cape Manager). However, building a 3.8 kernel with all BBB patches applied is not too hard to do, thanks to instructions and build scripts by Robert Nelson. Even better, Andrew Henderson has successfully used it in Android and has detailed the procedure. Following Andrew's build instructions, we can create an Android build that has a good chance of supporting our touch screen. As Andrew's article mentions, hardware acceleration (support for the BBB's PowerVR SGX 530 GPU) is not yet available for the 3.8 kernel, so we need to disable it in our build. One thing that is missing from Andrew's instructions is that you also need to disable building and installing of the SGX drivers, otherwise Android will try to use them at boot and fail to start SurfaceFlinger due to driver-kernel module incompatibility. You can do this by commenting out the dependency on sgx in rowboat's top-level Makefile like this:

      @@ -11,7 +13,7 @@
      CLEAN_RULE = sgx_clean wl12xx_compat_clean kernel_clean clean
      else
      ifeq ($(TARGET_PRODUCT), beagleboneblack)
      -rowboat: sgx
      +#rowboat: sgx
      CLEAN_RULE = sgx_clean kernel_clean clean
      else
      ifeq ($(TARGET_PRODUCT), beaglebone)


      Note that the kernel alone is not enough though: the boot loader (Das U-Boot) needs to be able to load the (flattened) device tree blob, so we need to build a recent version of that as well. Android seems to run OK with this configuration, but there are still a few things that are missing. The first you might notice is ADB support.

      ADB support

ADB (the Android Debug Bridge) is one of the best things to come out of the Android project, and if you have been doing Android development in any form for a while, you probably take it for granted. It is a fairly complex piece of software though, providing support for debugging, file transfer, port forwarding and more, and it requires kernel support in addition to the Android daemon and client application. In kernel terms this is known as the 'Android USB Gadget Driver', and it is not quite available in the 3.8 kernel, even though there have been multiple attempts at merging it. We could merge the required bits from Google's 3.8 kernel tree, but since we are trying to stay as close as possible to the original BBB 3.8 kernel, we'll use a different approach. While attempts to get ADB in the mainline continue, Function Filesystem (FunctionFS) driver support has been added to Android's ADB, and we can use that instead of the 'native' Android gadget. To use ADB with FunctionFS:
1. Configure FunctionFS support in the kernel (CONFIG_USB_FUNCTIONFS=y):
   Device Drivers -> USB Support ->
   USB Gadget Support -> USB Gadget Driver -> Function Filesystem
2. Modify the boot parameters in uEnv.txt to set the vendor and product IDs, as well as the device serial number:
   g_ffs.idVendor=0x18d1 g_ffs.idProduct=0x4e26 g_ffs.iSerialNumber=<serial>
3. Set up the FunctionFS directory and mount it in your init.am335xevm.usb.rc file:
   on fs
       mkdir /dev/usb-ffs 0770 shell shell
       mkdir /dev/usb-ffs/adb 0770 shell shell
       mount functionfs adb /dev/usb-ffs/adb uid=2000,gid=2000
4. Delete all lines referencing /sys/class/android_usb/android0/*. (Those nodes are created by the native Android gadget driver and are not available when using FunctionFS.)

Once this is done, you can reboot and you should see your device using adb devices soon after the kernel has loaded. Now you can debug the OS using Eclipse and push and install files directly using ADB. That said, this won't help you at all if the device doesn't boot due to some kernel misconfiguration, so you should definitely get an FTDI cable (the BBB does not have an on-board FTDI chip) to be able to see kernel messages during boot and get an 'emergency' shell when necessary.

            cgroups patch

            If you are running adb logcat in a console and experimenting with the device, you will notice a lot of 'Failed setting process group' warnings like this one:

            W/ActivityManager(  349): Failed setting process group of 4911 to 0
            W/SchedPolicy( 349): add_tid_to_cgroup failed to write '4911' (Permission denied);

Android's ActivityManager uses Linux control groups (cgroups) to run processes with different priorities (background, foreground, audio, system) by adding them to scheduling groups. In the mainline kernel this is only allowed for processes running as root (EUID=0), but Android changes this behaviour (naturally, with a patch) to only require the CAP_SYS_NICE capability, which allows the ActivityManager (running as system in the system_server process) to add app processes to scheduling groups. To get rid of this warning, you can disable scheduling groups by commenting out the code that sets up /dev/cpuctl/tasks in init.rc, or you can merge the modified functionality from Google's experimental 3.8 branch (which we've been trying to avoid all along...).

            Android hardware support

            Touchscreen

We now have a functional Android development device running mostly without warnings, so it's time to look closer at requirement #1. As we mentioned, once we disable hardware acceleration, the LCD4 works fine with our 3.8 kernel, but a few things are still missing. The LCD4 comes with 5 directional GPIO buttons, which are somewhat useful because scrolling on a resistive touchscreen takes some getting used to, but that is not the only thing they are good for. We can remap them as Android system buttons (Back, Home, etc.) by providing a key layout (.kl) file like this one:

key 105   BACK     WAKE
key 106   HOME     WAKE
key 103   MENU     WAKE
key 108   SEARCH   WAKE
key 28    POWER    WAKE

The GPIO keypad on the LCD identifies itself as 'gpio.12' (you can check this using the getevent command), so we need to name the layout file 'gpio_keys_12.kl'. To achieve this we modify device.mk in the BBB device directory (device/ti/beagleboneblack):

            ...
            # KeyPads
            PRODUCT_COPY_FILES += \
            $(LOCAL_PATH)/gpio-keys.kl:system/usr/keylayout/gpio_keys_12.kl \
            ...

            Now that we are using hardware buttons, we might want to squeeze some more screen real estate from the LCD4 by not showing the system navigation bar. This is done by setting config_showNavigationBar to false in the config.xml framework overlay file for our board:

            <bool name="config_showNavigationBar">false</bool>

While playing with the screen, we notice that it's a bit dark. Increasing the brightness via the display settings, however, does not seem to work. A friendly error message in logcat tells us that Android can't open the /sys/class/backlight/pwm-backlight/brightness file. Screen brightness and LEDs are controlled by the lights module on Android, so that's where we look first. There is a hardware-specific one under the beagleboneblack device directory, but it only supports the LCD3 and LCD7 displays. Adding support for the LCD4 is simply a matter of finding the file that controls brightness under /sys. For the LCD4 it's called /sys/class/backlight/backlight.10/brightness and works exactly like the other LCDs -- you get or set the brightness by reading or writing the backlight intensity level (0-100) as a string. We modify light.c (full source on Github) to first try the LCD4 device and voila -- setting the brightness via the Android UI now works... not. It turns out the brightness file is owned by root and the Settings app doesn't have permission to write to it. We can change this permission in the board's init.am335xevm.rc file:

            # PWM-Backlight for display brightness on LCD4 Cape
chmod 0666 /sys/class/backlight/backlight.10/brightness

            This finally settles it, so we can cross requirement #1 off our list and try to tackle #2 -- wireless support.

            WiFi adapter

The BBB has an onboard Ethernet port and it is supported out of the box by the rowboat build. If we want to make our new Android device mobile though, we need to add either a WiFi adapter or a 3G modem. 3G support is possible, but somewhat more involved, so we will try to enable WiFi first. There are a number of capes that provide WiFi and Bluetooth for the original BeagleBone, but they are not compatible with the BBB, so we will try using a regular WiFi dongle instead. As long as it has a Linux driver, it should be quite easy to wire it to Android by following the TI porting guide, right?

We'll use a WiFi dongle from LM Technologies based on the Realtek RTL8188CUS chipset, which is supported by the Linux rtl8192cu driver. In addition to the kernel driver, this wireless adapter requires a binary firmware blob, so we need to make sure it's loaded along with the kernel modules. But before getting knee-deep into makefiles, let's briefly review the Android WiFi architecture. Like most hardware support in Android, it consists of a kernel layer (WiFi adapter driver modules), a native daemon (wpa_supplicant), a HAL (wifi.c in libhardware_legacy, which communicates with wpa_supplicant via its control socket), a framework service and its public interface (WifiService and WifiManager), and the application/UI layer (the 'WiFi' screen in the Settings app, as well as SystemUI, responsible for showing the WiFi status bar indicator). That may sound fairly straightforward, but the WifiService implements some pretty complex state transitions in order to manage the underlying native WiFi support. Why is all this complexity needed? Android doesn't load kernel modules automatically, so the WifiStateMachine will try to load kernel modules, find and load any necessary firmware, start the wpa_supplicant daemon, scan for and connect to an AP, obtain an IP address via DHCP, check for and handle captive portals, and finally, if you are lucky, set up the connection and send out a broadcast to notify the rest of the system of the new network configuration. The wpa_supplicant daemon alone can go through 13 different states, so things can get quite involved when those are combined.
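From an application's point of view, all of this complexity is hidden behind the WifiManager facade. Here's a minimal sketch (standard SDK APIs; requires the ACCESS_WIFI_STATE and CHANGE_WIFI_STATE permissions):

Context ctx = getContext();
WifiManager wm = (WifiManager) ctx.getSystemService(Context.WIFI_SERVICE);
if (!wm.isWifiEnabled()) {
    // kicks off the whole chain described above: module loading,
    // wpa_supplicant startup, scanning, DHCP, connectivity broadcast
    wm.setWifiEnabled(true);
}
// scan results are delivered via the SCAN_RESULTS_AVAILABLE_ACTION broadcast
wm.startScan();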

Going step-by-step through the porting guide, we first enable support for our WiFi adapter in the kernel. That results in 6 modules that need to be loaded in order, plus the firmware blob. The HAL (wifi.c) can only load a single module though, so we pre-load all modules in the board's init.am335xevm.rc and set the wlan.driver.status property to ok in order to prevent WifiService from trying (and failing) to load the kernel module. We then define the wpa_supplicant and dhcpcd services in the init file. Last, but not least, we need to set the wifi.interface property to wlan0, otherwise Android will silently try to use a test device and fail to start wpa_supplicant. Both properties are set as PRODUCT_PROPERTY_OVERRIDES in device/ti/beagleboneblack/device.mk (see the device directory on Github). Here's what the relevant part of init.am335xevm.rc looks like:

            on post-fs-data
            # wifi
            mkdir /data/misc/wifi/sockets 0770 wifi wifi
            insmod /system/lib/modules/rfkill.ko
            insmod /system/lib/modules/cfg80211.ko
            insmod /system/lib/modules/mac80211.ko
            insmod /system/lib/modules/rtlwifi.ko
            insmod /system/lib/modules/rtl8192c-common.ko
            insmod /system/lib/modules/rtl8192cu.ko

            service wpa_supplicant /system/bin/wpa_supplicant \
            -iwlan0 -Dnl80211 -c/data/misc/wifi/wpa_supplicant.conf \
            -e/data/misc/wifi/entropy.bin
            class main
            socket wpa_wlan0 dgram 660 wifi wifi
            disabled
            oneshot

            service dhcpcd_wlan0 /system/bin/dhcpcd -ABKL
            class main
            disabled
            oneshot

            service iprenew_wlan0 /system/bin/dhcpcd -n
            class main
            disabled
            oneshot


In order to build the wpa_supplicant daemon, we then set BOARD_WPA_SUPPLICANT_DRIVER and WPA_SUPPLICANT_VERSION in device/ti/beagleboneblack/BoardConfig.mk. Note that we are using the generic wpa_supplicant, not the TI-patched one, and the WEXT driver instead of the NL80211 one (which requires a proprietary library to be linked in). Since we are preloading the driver kernel modules, we don't need to define WIFI_DRIVER_MODULE_PATH and WIFI_DRIVER_MODULE_NAME.

            BOARD_WPA_SUPPLICANT_DRIVER      := WEXT
            WPA_SUPPLICANT_VERSION := VER_0_8_X
            BOARD_WLAN_DEVICE := wlan0

To make the framework aware of our new WiFi device, we change networkAttributes and radioAttributes in the config.xml overlay file. Getting this wrong will lead to Android's ConnectivityManager totally ignoring WiFi, even if you manage to connect, and will result in the not too helpful 'No network connection' message. "1" here corresponds to the ConnectivityManager.TYPE_WIFI connection type (the built-in Ethernet connection is "9", TYPE_ETHERNET).

            <string-array name="networkAttributes" translatable="false">
            ...
            <item>"wifi,1,1,1,-1,true"</item>
            ...
            </string-array>
            <string-array name="radioAttributes" translatable="false">
            <item>"1,1"</item>
            ...
            </string-array>

            Finally, to make Android aware of our newly found WiFi features, we copy android.hardware.wifi.xml to /etc/permissions/ by adding it to device.mk. This will take care of enabling the Wi-Fi screen in the Settings app:

            PRODUCT_COPY_FILES := \
            ...
            frameworks/native/data/etc/android.hardware.wifi.xml:system/etc/permissions/android.hardware.wifi.xml \
            ...

After rebuilding rowboat and updating the root file system, you should be able to turn on WiFi and connect to an AP. Make sure you are using an AC power supply to power the BBB, because the WiFi adapter can draw quite a bit of current and you may not get enough via the USB cable. If the board is not getting enough power, you might experience failure to scan, dropped connections and other weird symptoms even if your configuration is otherwise correct. If WiFi support doesn't work for some reason, check the following:
• that the kernel module(s) and firmware (if any) are loaded (dmesg, lsmod)
• logcat output for relevant-looking error messages
            • that the wpa_supplicant service is defined properly in init.*.rc and the daemon is started
            • that /data/misc/wifi and wpa_supplicant.conf are available and have the right owner and permissions (wifi:wifi and 0660)
            • that the wifi.interface and wlan.driver.status properties are set correctly
            • use your debugger if all else fails
That was easy, right? Now that we have a working wireless connection, it's time to think about requirement #3: powering the device.

            Battery power

The BBB can be powered in three ways: via the miniUSB port, via the 5V AC adapter jack, and by using the power rail (VDD_5V) on the board directly. We can use any USB battery pack that provides enough current (~1A) and has enough capacity to keep the device going by simply connecting it to the miniUSB port. Those can be rather bulky and you will need an extra cable, so let's look for other options. As can be expected, there is a cape for that. The aptly named Battery Cape plugs into the BBB's expansion connectors and provides power directly to the power rail. We can plug the LCD4 on top of it and get an integrated (if a bit bulky) battery-powered touchscreen device. The Battery Cape holds 4 AA batteries connected as two sets in parallel. It is not simply a glorified battery holder though -- it has a boost converter that can provide a stable 1A current at 5V even if the battery voltage fluctuates (1.8-5.5V). It does provide support for monitoring battery voltage via the AIN4 input, but it does not have a 'fuel gauge' chip, so we can't display the battery level in Android without adding additional circuitry. This means our mobile device cannot display the battery level (yet) and unfortunately won't be able to shut itself down when the battery becomes critically low. That is something that definitely needs work, but for now we make the device always believe it's at 100% power by setting the hw.nobattery property to true. The alternative is to have it display the 'low battery' red warning icon all the time, so this approach is somewhat preferable. Four 1900 mAh batteries installed in the battery cape should provide enough power to run the device for a few hours, even when using WiFi, so we can (tentatively) mark requirement #3 as fulfilled.

            Flashing the device

If you have been following Andrew Henderson's build guide linked above, you have been 'installing' Android on an SD card and booting the BBB from it. This works fine and makes it easy to fix things when Android won't load, by simply mounting the SD card on your PC and editing or copying the necessary files. However, most consumer grade SD cards don't offer the best performance and can be quite unreliable. As we mentioned at the beginning of the post, the BBB comes with 2GB of built-in eMMC, which is enough to install Android and have some space left for a data partition. On most Android devices flashing can be performed either by booting into the recovery system or by using the fastboot tool over USB. The rowboat build does not have a recovery image, and while fastboot is supported by TI's fork of U-Boot, the version we are using to load the DT blob does not support fastboot yet. That leaves booting another OS in lieu of a recovery and flashing the eMMC from there, either manually or by using an automated flasher image. The flasher image simply runs a script at startup, so let's see how it works by doing it manually first. The latest BBB Angstrom bootable image (not the flasher one) is a good choice for our 'recovery' OS, because it is known to work on the BBB and has all the needed tools (fdisk, mkfs.ext4, etc.). After you dd it to an SD card, mount the card on your PC and copy the Android boot files and rootfs archive to an android/ directory. You can then boot from the SD card, get a root shell on Angstrom and install Android to the eMMC from there.

Android devices typically have boot, system and userdata partitions, as well as a recovery one and optionally others. The boot partition contains the kernel and a ramdisk which gets mounted at the root of the device filesystem. system contains the actual OS files and gets mounted read-only at /system, while userdata is mounted read-write at /data and stores system and app data, as well as user-installed apps. The partition layout used by the BBB is slightly different. The board's bootloader will look for the first stage bootloader (SPL, named MLO in U-Boot) on the first FAT partition of the eMMC. It in turn will load the second stage bootloader (u-boot.img), which will then search for an OS image according to its configuration. On embedded devices U-Boot configuration is typically stored as a set of variables in NAND, replaced by the uEnv.txt file on devices without NAND such as the BBB. Thus we need a FAT boot partition to host the SPL, u-boot.img, uEnv.txt, the kernel image and the DT blob. system and userdata will be formatted as EXT4 and will work as in typical Android devices.

The default Angstrom installation creates only two partitions -- a DOS one for booting, and a Linux one that hosts Angstrom Linux. To prepare the eMMC for Android, you need to delete the Linux partition and create two new Linux partitions in its place -- one to hold Android system files and one for user data. If you don't plan to install too many apps, you can simply make them equal sized. When booting from the SD card, the eMMC device will be /dev/block/mmcblk1, with the first partition being /dev/block/mmcblk1p1, the second /dev/block/mmcblk1p2 and so on. After creating those 3 partitions with fdisk, we format them with their respective filesystems:

            # mkfs.vfat -F 32 -n boot /dev/block/mmcblk1p1
            # mkfs.ext4 -L rootfs /dev/block/mmcblk1p2
            # mkfs.ext4 -L usrdata /dev/block/mmcblk1p3

Next, we mount boot and copy the boot related files, then mount rootfs and untar the rootfs.tar.bz2 archive. usrdata can be left empty; it will be populated on first boot.

            # mkdir -p /mnt/1/
            # mkdir -p /mnt/2/
            # mount -t vfat /dev/block/mmcblk1p1 /mnt/1
            # mount -t ext4 /dev/block/mmcblk1p2 /mnt/2
            # cp MLO u-boot.img zImage uEnv.txt am335x-boneblack.dtb /mnt/1/
            # tar jxvf rootfs.tar.bz2 -C /mnt/2/
            # umount /mnt/1
            # umount /mnt/2

With this, Android is installed on the eMMC and you can shut down the 'recovery' OS, remove the SD card and boot from the eMMC. Note that the U-Boot used has been patched to probe whether the SD card is available and will automatically boot from it (without you needing to hold the BBB's user boot button), so if you don't remove the 'recovery' SD card, it will boot the recovery OS again.

            We now have a working, touch screen Android device with wireless connectivity. Here's how it looks in action:

Our device is unlikely to win any design awards or replace your Nexus 7, but it could be used as the basis of dedicated Android devices, such as a wireless POS terminal or a SIP phone, and extended even further by adding more capes or custom hardware as needed.

            Summary

The BBB is fully capable of running Android, and by adding off-the-shelf peripherals such as a touch screen and a WiFi adapter you can easily turn it into a 'tablet' of sorts. While the required software is mostly available in the rowboat project, if you want the best hardware support you need to use the BBB's native 3.8 kernel and configure Android to use it. Making hardware fully available to the Android OS is mostly a matter of configuring the relevant HAL bits properly, but that is not always straightforward, even with board vendor provided documentation. The reason for this is that Android subsystems are not particularly cohesive -- you need to modify multiple, sometimes seemingly unrelated, files at different locations to get a single subsystem working. This is, of course, not specific to Android and is the price to pay for building a system by integrating originally unrelated OSS projects. On the positive side, most components can be replaced and the required changes can usually be confined to the (sometimes loosely defined) Hardware Abstraction Layer (HAL).

            Credential storage enhancements in Android 4.3

Our previous post was not related to Android security, but happened to coincide with the Android 4.3 announcement. Now that the post-release dust has settled, it's time to give it a proper welcome here as well. Being a minor update, there is nothing ground-breaking, but this 'revenge of the beans' brings some welcome enhancements and new APIs. Enough of those are related to security for some to even call 4.3 a 'security release'. Of course, the big star is SELinux, but credential storage, which has been a somewhat recurring topic on this blog, got a significant facelift too, so we'll look into it first. This post will focus mainly on the newly introduced features and interfaces, so you might want to review previous credential storage posts before continuing.

            What's new in 4.3

First and foremost, the system credential store, now officially named 'Android Key Store', has a public API for storing and using app-private keys. This was possible before too, but not officially supported and somewhat clunky on pre-ICS devices. Next, while only the primary (owner) user could use the system key store pre-4.3, it is now multi-user compatible and each user gets their own keys. Finally, there is an API, and even a system settings field, that lets you check whether the credential store is hardware-backed (Nexus 4, Nexus 7) or software only (Galaxy Nexus). While the core functionality hasn't changed much since the previous release, the implementation strategy has evolved quite a bit, so we will look briefly into that too. That's a lot to cover, so let's get started.
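The hardware-backed check, for example, is exposed as a static method on the KeyChain class:

// returns true if keys generated with the given algorithm (currently
// only RSA keys are generated) never leave the secure hardware
boolean hardwareBacked = KeyChain.isBoundKeyAlgorithm("RSA");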

            Public API

The API is outlined in the 'Security' section of the 4.3 new API introduction page, and details can be found in the official SDK reference, so we will only review it briefly. Instead of introducing yet another Android-specific API, key store access is exposed via standard JCE APIs, namely KeyPairGenerator and KeyStore. Both are backed by a new Android JCE provider, AndroidKeyStoreProvider, and are accessed by passing "AndroidKeyStore" as the type parameter of the respective factory methods (those APIs were actually available in 4.2 as well, but were not public). For a full sample detailing their usage, refer to the BasicAndroidKeyStore project in the Android SDK. To introduce their usage briefly: first you create a KeyPairGeneratorSpec that describes the keys you want to generate (including a self-signed certificate), initialize a KeyPairGenerator with it and then generate the keys by calling generateKeyPair(). The most important parameter is the alias, which you then pass to KeyStore.getEntry() in order to get a handle to the generated keys later. There is currently no way to specify key size or type, and generated keys default to 2048-bit RSA. Here's how all this looks:

// generate a key pair
Context ctx = getContext();
String alias = "key1";
Calendar notBefore = Calendar.getInstance();
Calendar notAfter = Calendar.getInstance();
notAfter.add(Calendar.YEAR, 1);
KeyPairGeneratorSpec spec = new KeyPairGeneratorSpec.Builder(ctx)
        .setAlias(alias)
        .setSubject(
                new X500Principal(String.format("CN=%s, OU=%s", alias,
                        ctx.getPackageName())))
        .setSerialNumber(BigInteger.ONE)
        .setStartDate(notBefore.getTime())
        .setEndDate(notAfter.getTime())
        .build();

KeyPairGenerator kpGenerator = KeyPairGenerator.getInstance("RSA", "AndroidKeyStore");
kpGenerator.initialize(spec);
KeyPair kp = kpGenerator.generateKeyPair();

// in another part of the app, access the keys
KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
keyStore.load(null);
KeyStore.PrivateKeyEntry keyEntry = (KeyStore.PrivateKeyEntry) keyStore.getEntry(alias, null);
RSAPublicKey pubKey = (RSAPublicKey) keyEntry.getCertificate().getPublicKey();
RSAPrivateKey privKey = (RSAPrivateKey) keyEntry.getPrivateKey();

            If the device has a hardware-backed key store implementation, keys will be generated outside of the Android OS and won't be directly accessible even to the system (or root user). If the implementation is software only, keys will be encrypted with a per-user key-encryption master key. We'll discuss key protection in detail later.

            Android 4.3 implementation

This hardware-backed design was initially implemented in the original Jelly Bean release (4.1), so what's new here? Credential storage has traditionally (since the Donut days) been implemented as a native keystore daemon that used a local socket as its IPC interface. The daemon has finally been retired and replaced with a 'real' Binder service, which implements the IKeystoreService interface. What's interesting here is that the service is implemented in C++, which is somewhat rare in Android. See the interface definition for details, but compared to the original keymaster-based implementation, IKeystoreService gets 4 new operations: getmtime(), duplicate(), is_hardware_backed() and clear_uid(). As expected, getmtime() returns the key modification time and duplicate() copies a key blob (used internally for key migration). is_hardware_backed() queries the underlying keymaster implementation and returns true when it is hardware-backed. The last new operation, clear_uid(), is a bit more interesting. As we mentioned, the key store now supports multi-user devices and each user gets their own set of keys, stored in /data/misc/keystore/user_N, where N is the Android user ID. Key names (aliases) are mapped to filenames as before, and the owner app UID now reflects the Android user ID as well. When an app that owns key store-managed keys is uninstalled for a user, only keys created by that user are deleted. If an app is completely removed from the system, its keys are deleted for all users. Since key access is tied to the app UID, this prevents a different app that happens to get the same UID from accessing an uninstalled app's keys. Key store reset, which deletes both key files and the master key, also affects only the current user. Here's how key files for the primary user might look:

            1000_CACERT_ca
            1000_CACERT_cacert
            10248_USRCERT_myKey
            10248_USRPKEY_myKey
            10293_USRCERT_rsa_key0
            10293_USRPKEY_rsa_key0

            The actual files are owned by the keystore service (which runs as the keystore Linux user) and it checks the calling UID to decide whether to grant or deny access to a key file, just as before. If the keys are protected by hardware, key files may contain only a reference to the actual key and deleting them may not destroy the underlying keys. Therefore, the del_key() operation is optional and may not be implemented.

            The hardware in 'hardware-backed'

To give some perspective to the whole 'hardware-backed' idea, let's briefly discuss how it is implemented on the Nexus 4. As you may know, the Nexus 4 is based on Qualcomm's Snapdragon S4 Pro APQ8064 SoC. Like most recent ARM SoCs it is TrustZone-enabled, and Qualcomm implements its Secure Execution Environment (QSEE) on top of it. Details are, as usual, quite scarce, but trusted applications are separated from the main OS and the only way to interact with them is through the controlled interface the /dev/qseecom device provides. Android applications that wish to interact with the QSEE load the proprietary libQSEEComAPI.so library and use the functions it provides to send 'commands' to the QSEE. As with most other SEEs, the QSEECom communication API is quite low-level and basically only allows for exchanging binary blobs (typically commands and replies), whose contents entirely depend on the secure app you are communicating with. In the case of the Nexus 4 keymaster, the commands used are: GENERATE_KEYPAIR, IMPORT_KEYPAIR, SIGN_DATA and VERIFY_DATA. The keymaster implementation merely creates command structures, sends them via the QSEECom API and parses the replies. It does not contain any cryptographic code itself.

An interesting detail is that the QSEE keystore trusted app (which may not be a dedicated app, but part of a more general purpose trusted application) doesn't return simple references to protected keys, but instead uses proprietary encrypted key blobs (not unlike Thales nCipher HSMs). In this model, the only thing that is actually protected by hardware is some form of 'master' key-encryption key (KEK), and user-generated keys are only indirectly protected by being encrypted with the KEK. This allows for a practically unlimited number of protected keys, but has the disadvantage that if the KEK is compromised, all externally stored key blobs are compromised as well (of course, the actual implementation might generate a dedicated KEK for each key blob created, or the key can be fused in hardware; either way no details are available). Qualcomm keymaster key blobs are defined in AOSP code as shown below. This suggests that private exponents are encrypted using AES, most probably in CBC mode, with an added HMAC-SHA256 to check encrypted data integrity. Those might be further encrypted with the Android key store master key when stored on disk.

#define KM_MAGIC_NUM       (0x4B4D4B42)  /* "KMKB" Key Master Key Blob in hex */
#define KM_KEY_SIZE_MAX    (512)         /* 4096 bits */
#define KM_IV_LENGTH       (16)          /* AES128 CBC IV */
#define KM_HMAC_LENGTH     (32)          /* SHA2 will be used for HMAC */

struct qcom_km_key_blob {
    uint32_t magic_num;
    uint32_t version_num;
    uint8_t  modulus[KM_KEY_SIZE_MAX];
    uint32_t modulus_size;
    uint8_t  public_exponent[KM_KEY_SIZE_MAX];
    uint32_t public_exponent_size;
    uint8_t  iv[KM_IV_LENGTH];
    uint8_t  encrypted_private_exponent[KM_KEY_SIZE_MAX];
    uint32_t encrypted_private_exponent_size;
    uint8_t  hmac[KM_HMAC_LENGTH];
};

So, in the case of the Nexus 4, the 'hardware' is simply the ARM SoC. Are other implementations possible? Theoretically, a hardware-backed keymaster implementation does not need to be based on TrustZone. Any dedicated device that can generate and store keys securely can be used, the usual suspects being embedded secure elements (SE) and TPMs. However, there are no mainstream Android devices with dedicated TPMs, and recent flagship devices have begun shipping without embedded SEs, most probably due to carrier pressure (price is hardly a factor, since embedded SEs are usually in the same package as the NFC controller). Of course, all mobile devices have some form of UICC (SIM card), which typically can generate and store keys, so why not use that? Well, Android still doesn't have a standard API to access the UICC, even though 'vendor' firmwares often include one. So while one could theoretically implement a UICC-based keymaster module compatible with the UICCs of your friendly neighbourhood MNO, it is not very likely to happen.

            Security level

So how secure are your brand new hardware-backed keys? The answer is, as usual: it depends. If they are stored in a real, dedicated, tamper-resistant hardware module, such as an embedded SE, they are as secure as the SE. And since this technology has been around for over 40 years, and even recent attacks are only effective against SEs using weak encryption algorithms, that means fairly secure. Of course, as we mentioned in the previous section, there are no current keymaster implementations that use actual SEs, but we can only hope.

            What about TrustZone? It is being aggressively marketed as a mobile security 'silver bullet' and streaming media companies have embraced it as an 'end-to-end' DRM solution, but does it really deliver? While the ARM TrustZone architecture might be sound at its core, in the end trusted applications are just software that runs at a slightly lower level than Android. As such, they can be readily reverse engineered, and of course vulnerabilities have been found. And since they run within the Secure World they can effectively access everything on the device, including other trusted applications. When exploited, this could lead to very effective and hard to discover rootkits. To sum this up, while TrustZone secure applications might provide effective protection against Android malware running on the device, given physical access, they, as well as the TrustZone kernel, are exploitable themselves. Applied to the Android key store, this means that if there is an exploitable vulnerability in any of the underlying trusted applications the keymaster module depends on, key-encryption keys could be extracted and 'hardware-backed' keys could be compromised.

            Advanced usage

As we mentioned in the first section, Android 4.3 offers a well defined public API to the system key store. It should be sufficient for most use cases, but if needed you can connect to the keystore service directly (as always, not really recommended). Because it is not part of the Android SDK, IKeystoreService doesn't have a wrapper 'Manager' class, so if you want to get a handle to it, you need to get one directly from the ServiceManager. That too is hidden from SDK apps, but, as usual, you can use reflection. From there, it's just a matter of calling the interface methods you need (see the sample project on Github). Of course, if the calling UID doesn't have the necessary permission, access will be denied, but most operations are available to all apps.

            Class smClass = Class.forName("android.os.ServiceManager");
            Method getService = smClass.getMethod("getService", String.class);
            IBinder binder = (IBinder) getService.invoke(null, "android.security.keystore");
            IKeystoreService keystore = IKeystoreService.Stub.asInterface(binder);

By using IKeystoreService directly you can store symmetric keys or other secret data in the system key store by using the put() method, something the current java.security.KeyStore implementation does not allow (it can only store PrivateKey entries). Such data is only encrypted by the key store master key, and even if the system key store is hardware-backed, the data is not protected by hardware in any way.

Accessing hidden services is not the only way to augment the system key store functionality. Since the sign() operation implements a 'raw' signature operation (RSASP1 in RFC 3447), key store-managed (including hardware-backed) keys can be used to implement signature algorithms not natively supported by Android. You don't need to use the IKeystoreService interface, because this operation is available through the standard JCE Cipher interface:

KeyStore ks = KeyStore.getInstance("AndroidKeyStore");
ks.load(null);
KeyStore.PrivateKeyEntry keyEntry = (KeyStore.PrivateKeyEntry) ks.getEntry("key1", null);
RSAPrivateKey privKey = (RSAPrivateKey) keyEntry.getPrivateKey();

Cipher cipher = Cipher.getInstance("RSA/ECB/NoPadding");
cipher.init(Cipher.ENCRYPT_MODE, privKey);
byte[] result = cipher.doFinal(in, 0, in.length);

If you use this primitive to implement, for example, Bouncy Castle's AsymmetricBlockCipher interface, you can use any signature algorithm available in the Bouncy Castle lightweight API (we actually use Spongy Castle to stay compatible with Android 2.x without too much hassle). For example, if you want to use a more modern (and provably secure) signature algorithm than Android's default PKCS#1.5 implementation, such as RSA-PSS, you can accomplish this with something like the following (see the sample project for AndroidRsaEngine):

            AndroidRsaEngine rsa = new AndroidRsaEngine("key1", true);

            Digest digest = new SHA512Digest();
            Digest mgf1digest = new SHA512Digest();
            PSSSigner signer = new PSSSigner(rsa, digest, mgf1digest, 512 / 8);
            RSAKeyParameters params = new RSAKeyParameters(false,
            pubKey.getModulus(), pubKey.getPublicExponent());

            signer.init(true, params);
            signer.update(signedData, 0, signedData.length);
            byte[] signature = signer.generateSignature();

            Likewise, if you need to implement RSA key exchange, you can easily make use of OAEP padding like this:

            AndroidRsaEngine rsa = new AndroidRsaEngine("key1", false);

            Digest digest = new SHA512Digest();
            Digest mgf1digest = new SHA512Digest();
            OAEPEncoding oaep = new OAEPEncoding(rsa, digest, mgf1digest, null);

            oaep.init(true, null);
            byte[] cipherText = oaep.processBlock(plainBytes, 0, plainBytes.length);

            The sample application shows how to tie all of those APIs together and features an elegant and fully Holo-compatible user interface:



An added benefit of using hardware-backed keys is that, since they are not generated using Android's default SecureRandom implementation, they should not be affected by the recently announced SecureRandom vulnerability (of course, since the implementation is closed, we can only hope that the trusted apps' RNG actually works). However, Bouncy Castle's PSS and OAEP implementations do use SecureRandom internally, so you might want to seed the PRNG 'manually' before starting your app to make sure it doesn't start with the same PRNG state as other apps. The keystore daemon/service uses /dev/urandom directly as a source of randomness when generating the master keys used for key file encryption, so those should not be affected. RSA keys generated by the softkeymaster OpenSSL-based software implementation might be affected, because OpenSSL uses RAND_bytes() to generate primes, but are probably OK since the keystore daemon/service runs in a dedicated process and the OpenSSL PRNG automatically seeds itself from /dev/urandom on first access (unfortunately there are no official details about the 'insecure SecureRandom' problem, so we can't be certain).
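If you want to be extra careful, you can seed a SecureRandom instance from /dev/urandom yourself and pass it to the PSSSigner from the earlier example explicitly, via Bouncy Castle's ParametersWithRandom (a sketch only; error handling omitted):

// gather a seed directly from the kernel's entropy pool
byte[] seed = new byte[32];
FileInputStream fis = new FileInputStream("/dev/urandom");
try {
    int off = 0;
    while (off < seed.length) {
        off += fis.read(seed, off, seed.length - off);
    }
} finally {
    fis.close();
}

// setSeed() supplements the PRNG state, it does not replace it
SecureRandom rng = new SecureRandom();
rng.setSeed(seed);

// make the signer use our seeded PRNG when generating the PSS salt
signer.init(true, new ParametersWithRandom(params, rng));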

            Summary

            Android 4.3 offers a standard SDK API for generating and accessing app-private RSA keys, which makes it easier for non-system apps to store their keys securely, without implementing key protection themselves. The new Jelly Bean also offers hardware-backed key storage on supported devices, which guarantees that even system or root apps cannot extract the keys. Protection against physical access attacks depends on the implementation, with most (all?) current implementations being TrustZone-based. Low-level RSA operations with key store managed keys are also possible, which enables apps to use cryptographic algorithms not provided by Android's built-in JCE providers.

            Using the SIM card as a secure element in Android

            $
            0
            0
            Our last post introduced one of Android 4.3's more notable security features -- improved credential storage, and while there are a few other enhancements worth discussing, this post will slightly change direction. As mentioned previously, mobile devices can include some form of a Secure Element (SE), but a smart card based UICC (usually called just 'SIM card') is almost universally present. Virtually all SIM cards in use today are programmable and thus can be used as a SE. Continuing the topic of hardware-backed security, we will now look into how SIMs can be programmed and used to enhance the security of Android applications.

            SIM cards

            First, a few words about terminology: while the correct term for modern mobile devices is UICC (Universal Integrated Circuit Card), since the goal of this post is not to discuss the differences between mobile networks, we will usually call it a 'SIM card' and only make the distinction when necessary. 

            So what is a SIM card? 'SIM' stands for Subscriber Identity Module and refers to a smart card that securely stores the subscriber identifier and the associated key used to identify and authenticate to a mobile network. It was originally used on GSM networks and standards were later extended to support 3G and LTE. Since SIMs are smart cards, they conform to ISO-7816 standards regarding physical characteristics and electrical interface. Originally they were the same size as 'regular' smart cards (Full-size, FF), but by far the most popular sizes nowadays are Mini-SIM (2FF) and Micro-SIM (3FF), with Nano-SIM (4FF) introduced in 2012. 

Of course, not every smart card that fits in the SIM slot can be used in a mobile device, so the next question is: what makes a smart card a SIM card? Technically, it's conformance to mobile communication standards such as 3GPP TS 11.11 and certification by the SIMalliance. In practice it is the ability to run an application that allows it to communicate with the phone (referred to as 'Mobile Equipment', ME, or 'Mobile Station', MS in related standards) and connect to a mobile network. While the original GSM standard did not make a distinction between the physical smart card and the software required to connect to the mobile network, with the introduction of 3G standards a clear distinction has been made. The physical smart card is referred to as the Universal Integrated Circuit Card (UICC), and different mobile network applications that run on it have been defined: GSM, CSIM, USIM, ISIM, etc. A UICC can host and run more than one network application (hence 'universal'), and thus can be used to connect to different networks. While network application functionality depends on the specific mobile network, their core features are quite similar: store network parameters securely and identify the subscriber to the network, as well as authenticate the user (optionally) and store user data.

            SIM card applications

Let's take GSM/3G as an example and briefly review how a network application works. For GSM the main network parameters are the network identity (International Mobile Subscriber Identity, IMSI; tied to the SIM), the phone number (MSISDN, used for routing calls and changeable) and a shared network authentication key Ki. To connect to the network the MS needs to authenticate itself and negotiate a session key. Both authentication and session key derivation make use of Ki, which is also known to the network and looked up by IMSI. The MS sends a connection request that includes its IMSI, which the network uses to find the corresponding Ki. The network then uses the Ki to generate a challenge (RAND), the expected challenge response (SRES) and a session key Kc, and sends RAND to the MS. Here's where the GSM application running on the SIM card comes into play: the MS passes the RAND to the SIM card, which in turn generates its own SRES and Kc. The SRES is sent to the network and if it matches the expected value, encrypted communication is established using the session key Kc. As you can see, the security of this protocol hinges solely on the secrecy of the Ki. Since all operations involving the Ki are implemented inside the SIM and it never comes into direct contact with either the MS or the network, the scheme is kept reasonably secure. Of course, security depends on the encryption algorithms used as well, and major weaknesses that allow intercepted GSM calls to be decrypted using off-the-shelf hardware were found in the original versions of the A3/A5 algorithms (which were initially secret). Jumping back to Android for a moment, all of this is implemented by the baseband software (more on this later) and network authentication is never directly visible to the main OS.
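The flow is easier to see in code. The sketch below is purely illustrative: the real A3/A8 algorithms are operator-specific and run only inside the SIM, so a SHA-256 based stand-in is used here just to make the example self-contained:

// stand-in for the operator's A3 (tag 1) and A8 (tag 2) algorithms
static byte[] a3a8(byte[] ki, byte[] rand, byte tag, int outLen) throws Exception {
    MessageDigest md = MessageDigest.getInstance("SHA-256");
    md.update(tag);
    md.update(ki);
    md.update(rand);
    return Arrays.copyOf(md.digest(), outLen);
}

byte[] ki = new byte[16];              // provisioned in the SIM, known to the network
new SecureRandom().nextBytes(ki);

// network side: look up Ki by IMSI, generate RAND, SRES and Kc
byte[] rand = new byte[16];
new SecureRandom().nextBytes(rand);
byte[] expectedSres = a3a8(ki, rand, (byte) 1, 4);  // A3 -> 32-bit SRES
byte[] networkKc = a3a8(ki, rand, (byte) 2, 8);     // A8 -> 64-bit Kc

// MS side: forward RAND to the SIM, which computes its own SRES and Kc
byte[] sres = a3a8(ki, rand, (byte) 1, 4);
byte[] kc = a3a8(ki, rand, (byte) 2, 8);

// access is granted only if SRES matches; traffic is then encrypted
// with Kc, which never crosses the air interface
boolean authenticated = Arrays.equals(sres, expectedSres);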

We've shown that SIM cards need to run applications; let's now say a few words about how those applications are implemented and installed. Initial smart cards were based on a file system model, where files (elementary files, EF) and directories (dedicated files, DF) were named with a two-byte identifier. Thus developing 'an application' consisted mostly of selecting an ID for the DF that hosts its files (called an ADF), and specifying the formats and names of the EFs that store data. For example, the GSM application is under the '7F20' ADF, and the USIM ADF hosts the EF_imsi, EF_keys, EF_sms, etc. files. Practically all SIMs used today are based on Java Card technology and implement the GlobalPlatform card specifications. Thus all network applications are implemented as Java Card applets and emulate the legacy file-based structure for backward compatibility. Applets are installed according to GlobalPlatform specifications by authenticating to the Issuer Security Domain (Card Manager) and issuing LOAD and INSTALL commands.

            One application management feature specific to SIM cards is support for OTA (Over-The-Air) updates via binary SMS. This functionality is not used by all carriers, but it allows them to remotely install applets on SIM cards they have issued. OTA is implemented by wrapping card commands (APDUs) in SMS T-PDUs, which the ME forwards to the SIM (ETSI TS 102 226). In most SIMs this is actually the only way to load applets on the card, even during initial personalization. That is why most of the common GlobalPlatform-compliant tools cannot be used as-is for managing SIMs. One needs to either use a tool that supports SIM OTA, such as the SIMalliance Loader, or implement the APDU wrapping/unwrapping, including any necessary encryption and integrity algorithms (ETSI TS 102 225). Incidentally, problems with the implementation of those secured packets on some SIMs that use DES as the encryption and integrity algorithm have been used to crack OTA update keys. The major use of the OTA functionality is to install and maintain SIM Toolkit (STK) applications, which can interact with the handset via standard 'proactive' commands (in reality implemented via polling) and display menus, or even open Web pages and send SMS. While STK applications are almost unheard of in the US and Asia, they are still heavily used in some parts of Europe and Africa for anything from mobile banking to citizen authentication. Android also supports STK with a dedicated STK system app, which is automatically disabled if the SIM card has no STK applets installed.
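            For reference, the secured command packet that wraps OTA APDUs has roughly the following layout (simplified from ETSI TS 102 225; field sizes omitted):

            // Command Packet (ETSI TS 102 225, simplified):
            // CPL   - command packet length
            // CHL   - command header length
            // SPI   - security parameter indicator (ciphering/integrity/counter settings)
            // KIc   - key and algorithm indicator for ciphering
            // KID   - key and algorithm indicator for the RC/CC/DS field
            // TAR   - toolkit application reference (3 bytes, addresses the target applet)
            // CNTR  - anti-replay counter (5 bytes)
            // PCNTR - padding counter
            // RC/CC/DS     - redundancy check, cryptographic checksum or digital signature
            // Secured data - the wrapped APDUs, possibly encrypted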

            Accessing the SIM card

            As mentioned above, network-related functionality is implemented by the baseband software, and what can be done from Android depends entirely on what features the baseband exposes. Android supports STK applications, so it does have internal support for communicating with the SIM, but the OS security overview explicitly states that 'low level access to the SIM card is not available to third-party apps'. So how can we use it as an SE then? Some Android builds from major vendors, most notably Samsung, provide an implementation of the SIMalliance Open Mobile API on some handsets, and an open source implementation (for compatible devices) is available from the SEEK for Android project. The Open Mobile API aims to provide a unified interface for accessing SEs on Android, including the SIM. To understand how the Open Mobile API works and the cause of its limitations, let's first review how access to the SIM card is implemented in Android.

            On Android devices, all mobile network functionality (dialing, sending SMS, etc.) is provided by the baseband processor (also referred to as the 'modem' or 'radio'). Android applications and system services communicate with the baseband only indirectly, via the Radio Interface Layer (RIL) daemon (rild), which in turn talks to the actual hardware using a manufacturer-provided RIL HAL library that wraps the proprietary interface the baseband provides. The SIM card is typically connected only to the baseband processor (sometimes also to the NFC controller via SWP), and thus all communication needs to go through the RIL. While the proprietary RIL implementation can always access the SIM in order to perform network identification and authentication, as well as read/write contacts and access STK applications, support for transparent APDU exchange is not always available. The standard way to provide this feature is via extended AT commands such as AT+CSIM (Generic SIM access) and AT+CGLA (Generic UICC Logical Channel Access), as defined in 3GPP TS 27.007, but some vendors implement it using proprietary extensions, so baseband support for the necessary AT commands does not automatically translate into SIM access for applications.
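            As an illustration, here is what selecting the master file looks like when wrapped in the generic SIM access AT command (the length parameter is the character count of the hex string; the response shown is illustrative):

            AT+CSIM=14,"A0A40000023F00"    // SELECT MF, wrapped per 3GPP TS 27.007
            +CSIM: 4,"9F17"                // SIM signals 0x17 bytes of response data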

            SEEK for Android provides patches that implement a resource manager service (SmartCardService) that can connect to any supported SE (embedded SE, ASSD or UICC), plus extensions to the Android telephony framework that allow for transparent APDU exchange with the SIM. As mentioned above, access through the RIL depends on the hardware and on the proprietary RIL library, so you need both a compatible device and a build that includes the SmartCardService and the related framework extensions. Thanks to some work by the u'smile project, UICC access on most variants of the popular Galaxy S2 and S3 handsets is available using a patched CyanogenMod build, so you can make use of the latest SEEK version. Even if you don't own one of those devices, you can use the SEEK emulator extension, which lets you use a standard PC/SC smart card reader to connect a SIM to the Android emulator. Note that a regular Java Card won't work out of the box, because the emulator will look for the GSM application and mark the card as not usable if it doesn't find one. You can modify the emulator to skip those steps, but a simpler solution is to install a dummy GSM application that always returns the expected responses.

            Once you have managed to get a device or the emulator to talk to the SIM, using the Open Mobile API to send commands is quite straightforward:

            // connect to the SE service; the connection is asynchronous and the second
            // argument (an SEService.CallBack) is notified when it is established
            SEService seService = new SEService(this, this);
            // list the available readers
            Reader[] readers = seService.getReaders();
            // assume the first one is the SIM and open a session
            Session session = readers[0].openSession();
            // open a logical (or basic) channel to the applet with the given AID
            Channel channel = session.openLogicalChannel(aid);
            // send a command APDU and get the response
            byte[] rapdu = channel.transmit(cmd);

            For this to work, you will need to request the org.simalliance.openmobileapi.SMARTCARD permission and add the org.simalliance.openmobileapi extension library to your manifest, as shown below. See the official wiki for more details.

            <manifest ...>

                <uses-permission android:name="org.simalliance.openmobileapi.SMARTCARD" />

                <application ...>
                    <uses-library
                        android:name="org.simalliance.openmobileapi"
                        android:required="true" />
                    ...
                </application>
            </manifest>

            SE-enabled Android applications

            Now that we can connect to the SIM card from applications, what can we use it for? Just like regular smart cards, an SE can be used to store data and keys securely and to perform cryptographic operations without keys ever leaving the card. One of the usual applications of smart cards is to store RSA authentication keys and certificates that are used for anything from desktop logon to VPN or SSL authentication. This is typically implemented by providing some sort of middleware library, usually a standard cryptographic service provider (CSP) module that can plug into the system CSP or be loaded by a compatible application. As the Android security model does not allow system extensions provided by third-party apps, in order to integrate with the system key management service such middleware would need to be implemented as a keymaster module for the system credential store (keystore) and bundled as a system library. This can be accomplished by building a custom ROM that installs our custom keymaster module, but we can also take advantage of the SE without rebuilding the whole system. The most straightforward way to do this is to implement the security-critical part of an app inside the SE and have the app act as a client that only provides a user-facing GUI. One such application provided with the SEEK distribution is an SE-backed one-time password (OTP) Google Authenticator app. Since the critical part of an OTP generator is the seed (usually a symmetric cryptographic key), OTP generators can easily be cloned once the seed is obtained or extracted. Thus OTP apps that store the seed in a regular file (like the official Google Authenticator app) provide little protection if the device OS is compromised. The SEEK GoogleOtpAuthenticator app both stores the seed and performs OTP generation inside the SE, making it impossible to recover the seed from the app data stored on the device.

            Another type of popular application that could benefit from using an SE is a password manager. Password managers typically use a user-supplied passphrase to derive a symmetric key, which is in turn used to encrypt stored passwords. This makes it hard to recover stored passwords without knowing the passphrase, but naturally the security level depends entirely on the passphrase's complexity. As usual, because typing a long string with rarely used characters on a mobile device is not a particularly pleasant experience, users tend to pick easier-to-type, low-entropy passphrases. If the key is stored in an SE instead, the passphrase can be skipped or replaced with a simpler PIN, making the password manager app both more user-friendly and more secure. Let's see how such an SE-backed password manager can be implemented using a Java Card applet and the Open Mobile API.

            DIY SIM password manager

            Ideally, all key management and encryption logic should be implemented inside the SE, with the client application only providing input (plain text passwords) and retrieving opaque encrypted data. The SE applet should not only provide encryption, but also guarantee the integrity of the encrypted data, either by using an algorithm that provides authenticated encryption (which most smart cards currently don't natively support) or by calculating a MAC over the encrypted data using HMAC or a similar mechanism. Smart cards typically provide some sort of encryption support, starting with DES/3DES for low-end models and going up to RSA and EC for top-of-the-line ones. Since public key cryptography is typically not needed for mobile network authentication or secure OTA (which is based on symmetric algorithms), SIM cards rarely support RSA or EC. A reasonably secure symmetric cipher and hash algorithm should be enough to implement a simple password manager though, so in theory we should be able to use even a lower-end SIM.

            As mentioned in the previous section, all recent SIM cards are based on Java Card technology, and it is possible to develop and load a custom applet, provided one has access to the Card Manager or OTA keys. Those are naturally not available for commercial MNO SIMs, so we need to use a blank 'programmable' SIM that either allows loading applets without authentication or comes bundled with the required keys. Such cards are quite hard, but not impossible, to come by, so let's see how a password manager applet could be implemented. We won't discuss the basics of Java Card programming, but jump straight to the implementation; refer to the official documentation, or a tutorial, if you need an introduction.

            The Java Card API provides a subset of the JCA classes, with an interface optimized for using pre-allocated, shared byte arrays, as is typical on a memory-constrained platform such as a smart card. A basic encryption example looks something like this:

            byte[] buff = apdu.getBuffer();
            // ...
            DESKey deskey = (DESKey) KeyBuilder.buildKey(KeyBuilder.TYPE_DES_TRANSIENT_DESELECT,
                    KeyBuilder.LENGTH_DES3_2KEY, false);
            deskey.setKey(keyBytes, (short) 0);
            Cipher cipher = Cipher.getInstance(Cipher.ALG_DES_CBC_PKCS5, false);
            cipher.init(deskey, Cipher.MODE_ENCRYPT);
            cipher.doFinal(data, (short) 0, (short) data.length, buff, (short) 0);

            As you can see, a dedicated key object, automatically cleared when the applet is deselected, is first created and then used to initialize a Cipher instance. Besides the unwieldy number of casts to short (necessary because 'classic' Java Card does not support int, Java's default integer type), the code is very similar to what you would find in a Java SE or Android application. Hashing uses the MessageDigest class and follows a similar routine. Using the system-provided Cipher and MessageDigest classes as building blocks, it is fairly straightforward to implement CBC-mode encryption and HMAC for data integrity.

            However, as it happens, our low-end SIM card does not provide usable implementations of those classes (even though the spec sheet claims it does), so we need to start from scratch. Fortunately, since Java Cards can execute arbitrary programs (as long as they fit in memory), it is possible to include our own encryption algorithm implementation in the applet. Even better, a Java Card-optimized AES implementation is freely available. It provides only the basic pieces of AES -- key schedule generation and single-block encryption -- so some additional work is required to match the functionality of the Cipher class. The bigger downside is that an algorithm implemented in software cannot take advantage of the specialized crypto co-processor most smart cards have: with this implementation, our SIM (8-bit CPU, 6KB RAM) takes about 2 seconds to process a single AES block with a 128-bit key. Performance can be improved slightly by reducing the number of AES rounds to 7 (10 are recommended for 128-bit keys), but that would both lower the security level of the system and result in a non-standard cipher, making testing more difficult. Another disadvantage is that native key objects are usually stored in a secured memory area that is better protected from side-channel attacks; by using our own cipher we are forced to store keys in regular byte arrays. With those caveats, this AES implementation should give us what we need for our demo application. Using the JavaCardAES class as a building block, our AES CBC encryption routine looks something like this:

            // generate the round keys and pad the plain text to the block size
            aesCipher.RoundKeysSchedule(keyBytes, (short) 0, roundKeysBuff);
            short padSize = addPadding(cipherBuff, offset, len);
            short paddedLen = (short) (len + padSize);
            short blocks = (short) (paddedLen / AES_BLOCK_LEN);

            for (short i = 0; i < blocks; i++) {
                short cipherOffset = (short) (i * AES_BLOCK_LEN);
                // XOR the next plain text block into the chaining value (IV at first)
                for (short j = 0; j < AES_BLOCK_LEN; j++) {
                    cbcV[j] ^= cipherBuff[(short) (cipherOffset + j)];
                }
                // encrypt in place and store the result as the next cipher text block
                aesCipher.AESEncryptBlock(cbcV, OFFSET_ZERO, roundKeysBuff);
                Util.arrayCopyNonAtomic(cbcV, OFFSET_ZERO, cipherBuff,
                        cipherOffset, AES_BLOCK_LEN);
            }

            Not as concise as using the system crypto classes, but it gets the job done. Finally (not shown), the IV and cipher text are copied into the APDU buffer and sent back to the caller. Decryption follows a similar pattern. One thing that is obviously missing is the MAC; as it turns out, a hash algorithm implemented in software is prohibitively slow on our SIM (mostly because it needs to access large lookup tables stored in the slow card EEPROM). While a MAC can also be implemented using the AES primitive, we have omitted it from the sample applet. In practice, tampering with the cipher text of encrypted passwords would only result in incorrect passwords, but it is still a good idea to use a MAC when implementing this on a fully functional Java Card.
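            For illustration, a basic CBC-MAC over the cipher text could be built from the same JavaCardAES primitive along the lines below. This is a sketch only: macBuff, macKeyBytes and macRoundKeysBuff are assumed to be allocated separately from the encryption buffers, a separate MAC key must be used, and plain CBC-MAC is only safe for fixed-length messages.

            // compute a CBC-MAC over the cipher text with a dedicated MAC key;
            // reusing the encryption key would weaken both operations
            Util.arrayFillNonAtomic(macBuff, OFFSET_ZERO, AES_BLOCK_LEN, (byte) 0);
            aesCipher.RoundKeysSchedule(macKeyBytes, (short) 0, macRoundKeysBuff);
            for (short i = 0; i < blocks; i++) {
                short off = (short) (i * AES_BLOCK_LEN);
                for (short j = 0; j < AES_BLOCK_LEN; j++) {
                    macBuff[j] ^= cipherBuff[(short) (off + j)];
                }
                aesCipher.AESEncryptBlock(macBuff, OFFSET_ZERO, macRoundKeysBuff);
            }
            // macBuff now holds the raw CBC-MAC tag, to be appended to the output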

            Our applet can now perform encryption and decryption, but one critical piece is still missing -- a random number generator. The Java Card API has the RandomData class, typically used to generate key material and IVs for cryptographic operations, but just as with the Cipher class it is not available on our SIM. Therefore, unfortunately, we need to apply the DIY approach again. To keep things simple and with a (somewhat) reasonable response time, we implement a simple pseudo-random number generator (PRNG) based on AES in counter mode. As mentioned above, the largest integer type in classic Java Card is short, so the counter will wrap as soon as it goes over 32767. While this can be overcome fairly easily by using a persistent byte array to simulate a long (or BigInteger if you are more ambitious), the bigger problem is that there is no suitable source of entropy on the smart card that we can use to seed the PRNG. Therefore, the PRNG AES key and nonce need to be specified at applet install time and must be unique to each SIM. Our simplistic PRNG implementation based on the JavaCardAES class is shown below (buff is the output buffer):

            // place the nonce and the current counter value in the input block
            Util.arrayCopyNonAtomic(prngNonce, OFFSET_ZERO, cipherBuff,
                    OFFSET_ZERO, (short) prngNonce.length);
            Util.setShort(cipherBuff, (short) (AES_BLOCK_LEN - 2), prngCounter);

            // encrypt the block with the PRNG key and increment the counter
            aesCipher.RoundKeysSchedule(prngKey, (short) 0, roundKeysBuff);
            aesCipher.AESEncryptBlock(cipherBuff, OFFSET_ZERO, roundKeysBuff);
            prngCounter++;

            Util.arrayCopyNonAtomic(cipherBuff, OFFSET_ZERO, buff, offset, len);

            The recent Bitcoin wallet problems traced to a repeatable PRNG in Android, the controversy around the Dual_EC_DRBG algorithm (believed to be weak by design, yet used by default in popular crypto toolkits), and the low-quality hardware RNGs found in some FIPS-certified smart cards have all highlighted the critical impact a flawed PRNG can have on any system that uses cryptography. That is why a DIY PRNG is definitely not something you would want to use in a production system. Do find a SIM that provides working crypto classes, and do use RandomData.ALG_SECURE_RANDOM to initialize the PRNG (that won't help much if the card's hardware RNG is flawed, of course).
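            On a card with working crypto classes, the equivalent is a two-liner using the card's hardware RNG:

            // use the card's hardware RNG via the standard Java Card API
            RandomData rng = RandomData.getInstance(RandomData.ALG_SECURE_RANDOM);
            rng.generateData(buff, offset, len);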

            With that we have all the pieces needed to implement the password manager applet, and what is left is to define and expose a public interface. For Java Card this means defining the values of the CLA and INS bytes the applet can process. Besides the obviously required encrypt and decrypt commands, we also provide commands to get the current state, initialize and clear the applet.

            static final byte CLA = (byte) 0x80;
            static final byte INS_GET_STATUS = (byte) 0x01;
            static final byte INS_GEN_RANDOM = (byte) 0x02;
            static final byte INS_GEN_KEY = (byte) 0x03;
            static final byte INS_ENCRYPT = (byte) 0x04;
            static final byte INS_DECRYPT = (byte) 0x05;
            static final byte INS_CLEAR = (byte) 0x06;

            Once we have a working applet, implementing the Android client is fairly straightforward. We need to connect to the SEService, open a logical channel to our applet (AID: 73 69 6d 70 61 73 73 6d 61 6e 01) and send the appropriate APDUs using the protocol outlined above. For example, sending a string to be encrypted requires the following code, assuming we already have an open Session to the SE (0x9000 is the standard ISO 7816-3/4 success status word, SW):

            // open a logical channel to the password manager applet
            Channel channel = session.openLogicalChannel(fromHex("73 69 6d 70 61 73 73 6d 61 6e 01"));
            byte[] data = "password".getBytes("ASCII");
            // build the ENCRYPT command APDU: CLA | INS | P1 P2 | Lc | data | Le
            String cmdStr = "80 04 00 00 " + String.format("%02x", data.length)
                    + toHex(data) + "00";
            byte[] rapdu = channel.transmit(fromHex(cmdStr));
            // the last two bytes of the response are the status word
            short sw = (short) ((rapdu[rapdu.length - 2] << 8) | (0xff & rapdu[rapdu.length - 1]));
            if (sw != (short) 0x9000) {
                // handle error
            }
            byte[] ciphertext = Arrays.copyOf(rapdu, rapdu.length - 2);
            String encrypted = Base64.encodeToString(ciphertext, Base64.NO_WRAP);

            Besides calling applet operations by sending commands to the SE, the sample Android app also has a simple database that stores the encrypted passwords paired with a description, and displays the currently managed passwords in a list view. Long-pressing on a password name brings up a contextual action that allows you to decrypt and temporarily display the password, so you can copy it and paste it into the target application. The current implementation does not require a PIN to decrypt passwords, but one can easily be added using Java Card's OwnerPIN class, optionally disabling the applet once a number of incorrect tries is reached. While this app can hardly compete with popular password managers, it has enough functionality to both illustrate the concept of an SE-backed app and be practically useful. Passwords can be added by pressing the '+' action item, and the delete item clears the encryption key and PRNG counter, but not the PRNG seed and nonce. A screenshot of the award-winning UI is shown below. Full source code for both the applet and the Android app is available on Github.


            Summary

            The AOSP version of Android does not provide a standard API to use the SIM card as an SE, but many vendors do, and as long as the device baseband and RIL support APDU exchange, one can be added using the SEEK for Android patches. This makes it possible to improve the security of Android apps by using the SIM as a secure element, both to store sensitive data and to implement critical functionality inside it. Commercial SIMs do not allow for installing arbitrary user applications, but applets can be automatically loaded by the carrier using the SIM OTA mechanism, and apps that take advantage of those applets can be distributed through regular channels, such as the Play Store.

            Thanks to Michael for developing the Galaxy S2/3 RIL patch and helping with getting it to work on my somewhat exotic S2.

            Signing email with an NFC smart card on Android

            Last time we discussed how to access the SIM card and use it as a secure element to enhance Android applications. One of the main problems with this approach is that, since SIM cards are controlled by the MNO, any applets running on a commercial SIM have to be approved by it. Needless to say, this considerably limits flexibility. Fortunately, NFC-enabled Android devices can communicate with practically any external contactless smart card, and you can install anything on those. Let's explore how an NFC smart card can be used to sign email on Android.

            NFC smart cards

            As discussed in previous posts, a smart card is a secure execution environment on a single chip, typically packaged in a credit-card sized plastic package, or in the smaller 2FF/3FF/4FF form factors when used as a SIM card. Traditionally, smart cards connect with a card reader using a number of gold-plated contact pads. The pads are used both to provide power to the card and to establish serial communication with its I/O interface. Size, electrical characteristics and communication protocols are defined in the ISO 7816 series of standards. Those traditional cards are referred to as 'contact smart cards'. Contactless cards, on the other hand, do not need physical contact with the reader: they draw power and communicate using RF induction. The communication protocol they use (T=CL) is defined in ISO 14443 and is very similar to the T=1 protocol used by contact cards. While smart cards that have only a contactless interface do exist, dual-interface cards that have both contacts and an antenna for RF communication are the majority. The underlying RF standard used varies by manufacturer, and both Type A and Type B are common.

            As we know, NFC has three standard modes of operation: reader/writer (R/W), peer-to-peer (P2P) and card emulation (CE) mode. All NFC-enabled Android devices support R/W and P2P mode, and some can also provide CE, either using a physical secure element (SE) or software emulation. All that is needed to communicate with a contactless smart card is the basic R/W mode, so contactless cards can be used on practically all Android devices with NFC support. This functionality is provided by the IsoDep class: it offers only basic command-response exchange via the transceive() method, and any higher-level protocol needs to be implemented by the client application.
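            A minimal exchange looks like the sketch below; the tag comes from an NFC dispatch intent, and the AID and the fromHex() helper are illustrative:

            // obtain an ISO 14443-4 connection from a dispatched tag
            IsoDep isoDep = IsoDep.get(tag);
            isoDep.connect();
            isoDep.setTimeout(5000); // card operations can be slow, extend the timeout
            // SELECT a (hypothetical) applet by its AID and exchange raw APDUs
            byte[] rapdu = isoDep.transceive(fromHex("00 A4 04 00 07 A0 00 00 00 01 02 03"));
            isoDep.close();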

            Securing email

            There have been quite a few new services trying to reinvent secure email in recent years. They try to make it 'easy' for users by taking care of key management and shifting all cryptographic operations to the server. As recent events have reconfirmed, introducing an intermediary is not a very good idea if communication between two parties is to be, and remain, secure. Secure email itself is hardly a new idea, and the 'old-school' way of implementing it relies on public key cryptography. Each party is responsible both for protecting their private key and for verifying that the public key of their counterpart matches their actual identity. The method used to verify identity is the biggest difference between the two major secure email standards in use today, PGP and S/MIME. PGP relies on the so-called 'web of trust', where everyone can vouch for the identity of someone by signing their key (usually after meeting them in person), and keys with more signatures can be considered trustworthy. S/MIME, on the other hand, relies on PKI and X.509 certificates, where the issuing authority (CA) is relied upon to verify identity when issuing a certificate. PGP has the advantage of being decentralized, which makes it harder to break the system by compromising a single entity, as has happened with a number of public CAs in recent years. However, it requires much more user involvement and is especially challenging for new users. Additionally, while many commercial and open source PGP implementations exist, most mainstream email clients do not support PGP out of the box and require the installation of plugins and additional software. On the other hand, all major proprietary (Outlook variants, Mail.app, etc.) and open source (Thunderbird) email clients have built-in and mature S/MIME implementations. We will use S/MIME for this example because it is a lot easier to get started with and test, but the techniques described can be used to implement PGP-secured email as well. Let's first discuss how S/MIME is implemented.

            Signing with S/MIME

            The S/MIME (Secure/Multipurpose Internet Mail Extensions) standard defines how to include signed and/or encrypted content in email messages. It specifies both the procedures for creating signed or encrypted (enveloped) content and the MIME media types to use when adding it to the message. For example, a signed message has a part with the Content-Type: application/pkcs7-signature; name=smime.p7s; smime-type=signed-data, which contains the message signature and any associated attributes. To an email client that does not support S/MIME, like most Web mail apps, this looks like an attachment called smime.p7s. S/MIME-compliant clients instead parse and verify the signature and display some visual indication of the verification status.
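            The overall structure of a signed message (per RFC 5751; headers abbreviated and the base64 content truncated) looks roughly like this:

            Content-Type: multipart/signed; protocol="application/pkcs7-signature";
                micalg=sha-512; boundary="boundary42"

            --boundary42
            Content-Type: text/plain

            This is the signed message body.
            --boundary42
            Content-Type: application/pkcs7-signature; name=smime.p7s
            Content-Transfer-Encoding: base64
            Content-Disposition: attachment; filename=smime.p7s

            MIAGCSqGSIb3DQEHAqCAMIACAQEx... (the DER-encoded CMS SignedData)
            --boundary42--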

            The more interesting question, however, is what's in smime.p7s? The 'p7' stands for PKCS#7, the predecessor of the current Cryptographic Message Syntax (CMS). CMS defines structures used to package signed, authenticated or encrypted content and related attributes. As with most PKI X.509-derived standards, those structures are ASN.1-based and encoded into binary using DER, just like certificates and CRLs. They are sequences of other structures, which are in turn composed of yet other ASN.1 structures, which are... basically sequences all the way down. Let's look at the higher-level ones used for signed email. The CMS structure describing signed content is predictably called SignedData and looks like this:

            SignedData ::= SEQUENCE {
                version CMSVersion,
                digestAlgorithms DigestAlgorithmIdentifiers,
                encapContentInfo EncapsulatedContentInfo,
                certificates [0] IMPLICIT CertificateSet OPTIONAL,
                crls [1] IMPLICIT RevocationInfoChoices OPTIONAL,
                signerInfos SignerInfos }

            Here digestAlgorithms contains the OIDs of the hash algorithms used to produce the signature (one for each signer) and encapContentInfo describes the data that was signed, and can optionally contain the actual data. The optional certificates and crls fields are intended to help verify the signer certificate. If absent, the verifier is responsible for collecting them by other means. The most interesting part, signerInfos, contains the actual signature and information about the signer. It looks like this:

            SignerInfo ::= SEQUENCE {
                version CMSVersion,
                sid SignerIdentifier,
                digestAlgorithm DigestAlgorithmIdentifier,
                signedAttrs [0] IMPLICIT SignedAttributes OPTIONAL,
                signatureAlgorithm SignatureAlgorithmIdentifier,
                signature SignatureValue,
                unsignedAttrs [1] IMPLICIT UnsignedAttributes OPTIONAL }

            Besides the signature value and the algorithms used, SignerInfo contains a signer identifier, used to find the exact certificate that was used, and a number of optional signed and unsigned attributes. Signed attributes are included when producing the signature value and can contain additional information about the signature, such as the signing time. Unsigned attributes are not covered by the signature value, but can contain signed data themselves, such as a countersignature (an additional signature over the signature value).

            To sum this up: in order to produce an S/MIME signed message, we need to sign the email contents and any attributes, generate the SignerInfo structure, wrap it into a SignedData, DER-encode the result and add it to the message using the appropriate MIME type. Sounds easy, right? Let's see how this can be done on Android.

            Using S/MIME on Android

            On any platform, you need two things in order to generate an S/MIME message: a cryptographic provider that can perform the actual signing using an asymmetric key, and an ASN.1 parser/generator in order to produce the SignedData structure. Android has JCE providers that support RSA, recently even with hardware-backed keys. What's left is an ASN.1 generator. While ASN.1 and DER/BER have been around for ages, and there are quite a few parsers/generators, the practically useful choices are not that many. No one really generates code directly from the ASN.1 modules found in the related standards; most libraries implement only the necessary parts, building on available components. Both of Android's major cryptographic libraries, OpenSSL and Bouncy Castle, contain ASN.1 parsers/generators and support CMS. The related APIs are not public though, so we need to include our own libraries.

            As usual we turn to Spongy Castle, which provides all of Bouncy Castle's functionality under a different namespace. In order to be able to process CMS and generate S/MIME messages, we need the optional scpkix and scmail packages. The first contains PKIX and CMS related classes, the second implements S/MIME. However, there is a twist: Android lacks some of the classes required for generating S/MIME messages. As you may know, Android implements most of the standard Java APIs, with a few exceptions, most notably the GUI-related AWT and Swing packages. Those are rarely missed, because Android has its own widget and graphics libraries. However, besides widgets, AWT contains classes related to MIME media types as well. Unfortunately, some of those are used in libraries that deal with MIME objects, such as JavaMail and the Bouncy Castle S/MIME implementation. JavaMail versions that include alternative AWT implementations repackaged for Android have been available for some time, but since they use some non-standard package names, they are not a drop-in replacement. The same applies to Spongy Castle: some source code modifications are required to get scmail to work with the javamail-android library.

            With that sorted out, generating an S/MIME message on Android is just a matter of finding the signer key and certificate and using the proper Bouncy Castle and JavaMail APIs to generate and send the message:

            PrivateKey signerKey = KeyChain.getPrivateKey(ctx, "smime");
            X509Certificate[] chain = KeyChain.getCertificateChain(ctx, "smime");
            X509Certificate signerCert = chain[0];
            X509Certificate caCert = chain[1];

            SMIMESignedGenerator gen = new SMIMESignedGenerator();
            gen.addSignerInfoGenerator(new JcaSimpleSignerInfoGeneratorBuilder()
                    .setProvider("AndroidOpenSSL")
                    .setSignedAttributeGenerator(new AttributeTable(signedAttrs))
                    .build("SHA512withRSA", signerKey, signerCert));
            Store certs = new JcaCertStore(Arrays.asList(signerCert, caCert));
            gen.addCertificates(certs);

            MimeMultipart mm = gen.generate(mimeMsg, "SC");
            MimeMessage signedMessage = new MimeMessage(session);
            Enumeration headers = mimeMsg.getAllHeaderLines();
            while (headers.hasMoreElements()) {
                signedMessage.addHeaderLine((String) headers.nextElement());
            }
            signedMessage.setContent(mm);
            signedMessage.saveChanges();

            Transport.send(signedMessage);

            Here we first get the signer key and certificate using the KeyChain API and then create an S/MIME generator by specifying the key, certificate, signature algorithm and signed attributes. Note that we specify the AndroidOpenSSL provider explicitly, which is the only one that can use hardware-backed keys. This is only required if you have changed the default provider order when installing Spongy Castle; by default AndroidOpenSSL is the preferred JCE provider. We then add the certificates we want to include in the generated SignedData and generate a multi-part MIME message that includes both the original message (mimeMsg) and the signature. Finally, we send the message using the JavaMail Transport class. The JavaMail Session initialization is omitted from the example above; see the sample app for how to set it up to use Gmail's SMTP server. This requires the Gmail account password to be specified, but with a little more work it can be replaced with an OAuth token obtained from the system AccountManager.

            So what about smart cards?

            Using a MuscleCard to sign email

            In order to sign email using keys stored on a smart card we need a few things:
            • a dual-interface smart card that supports RSA keys
            • a crypto applet that allows us to sign data with those keys
            • some sort of middleware that exposes card functionality through a standard crypto API
            Most recent dual-interface Java Cards fulfill our requirements; we will be using an NXP J3A081, which supports JavaCard 2.2.2 and 2048-bit RSA keys. When it comes to open source crypto applets though, the choices are unfortunately quite limited. Just about the only one that is both full-featured and well supported in middleware libraries is the venerable MuscleCard applet. We will be using one of the fairly recent forks, updated to support JavaCard 2.2 and extended APDUs. To load the applet on the card you need a GlobalPlatform-compatible loader application, like GPJ, and of course the CardManager keys. Once you have loaded it, you can personalize the applet by generating or importing keys and certificates. After that the card can be used in any application that supports PKCS#11, for example Thunderbird and Firefox. Because the card is dual-interface, practically any smart card reader can be used on the desktop. When the OpenSC PKCS#11 module is loaded in Thunderbird, the card shows up in the Security Devices dialog like this:


            If the certificate installed on the card has your email address in its Subject Alternative Name extension, you should be able to send signed and encrypted emails (if you have the recipient's certificate, of course). But how do we achieve the same thing on Android?

            Using MuscleCard on Android

            Android doesn't support PKCS#11 modules, so in order to expose the card's crypto functionality we could implement a custom JCE provider that provides card-backed implementations of the Signature and KeyStore engine classes. That is quite a bit of work though, and since we are only targeting the Bouncy Castle S/MIME API, we can get away with implementing the ContentSigner interface. It provides an OutputStream that clients write the data to be signed to, an AlgorithmIdentifier for the signature method used, and a getSignature() method that returns the actual signature value. Our MuscleCard-backed implementation could look like this:

            class MuscleCardContentSigner implements ContentSigner {

                private ByteArrayOutputStream baos = new ByteArrayOutputStream();
                private MuscleCard msc;
                private String pin;
                ...
                @Override
                public byte[] getSignature() {
                    msc.select();
                    msc.verifyPin(pin);

                    byte[] data = baos.toByteArray();
                    baos.reset();
                    return msc.sign(data);
                }
            }

            Here the MuscleCard class is our 'middleware': it encapsulates the card's RSA signature functionality and is implemented by sending the required command APDUs for each operation using Android's IsoDep API, aggregating and converting the results as needed. For example, the verifyPin() method is implemented like this:

            class MuscleCard {

                private IsoDep tag;

                public boolean verifyPin(String pin) throws IOException {
                    // MuscleCard VERIFY PIN command (INS 0x42), PIN sent as ASCII
                    String cmd = String.format("B0 42 01 00 %02x %s", pin.length(),
                            toHex(pin.getBytes("ASCII")));
                    ResponseApdu rapdu = new ResponseApdu(tag.transceive(fromHex(cmd)));
                    return rapdu.getSW() == SW_SUCCESS;
                }
            }

            Signing is a little more complicated because it involves creating and updating temporary I/O objects, but follows the same principle. Since the applet supports neither padding nor hashing, we need to generate and pad the PKCS#1 (or PSS) signature block on Android and send the complete block to the card.
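            Building the padded block is mechanical. A sketch for a SHA-256 digest and a 2048-bit key is shown below; the DigestInfo prefix is the standard ASN.1 header for SHA-256, and fromHex() is the same helper used elsewhere in this post:

            // EMSA-PKCS1-v1_5: 0x00 0x01 | PS (0xff ... 0xff) | 0x00 | DigestInfo | digest
            static final byte[] SHA256_DIGEST_INFO = fromHex(
                    "30 31 30 0d 06 09 60 86 48 01 65 03 04 02 01 05 00 04 20");

            static byte[] padPkcs1(byte[] digest, int modulusSize) {
                byte[] block = new byte[modulusSize]; // leading 0x00 and separator stay zero
                block[1] = 0x01;
                int psLen = modulusSize - SHA256_DIGEST_INFO.length - digest.length - 3;
                Arrays.fill(block, 2, 2 + psLen, (byte) 0xff);
                System.arraycopy(SHA256_DIGEST_INFO, 0, block, 3 + psLen,
                        SHA256_DIGEST_INFO.length);
                System.arraycopy(digest, 0, block,
                        3 + psLen + SHA256_DIGEST_INFO.length, digest.length);
                return block;
            }

            Finally, we need to plug our signer implementation into the Bouncy Castle CMS generator: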

            ContentSigner mscCs = new MuscleCardContentSigner(muscleCard, pin);
            gen.addSignerInfoGenerator(new JcaSignerInfoGeneratorBuilder(
                    new JcaDigestCalculatorProviderBuilder()
                            .setProvider("SC")
                            .build()).build(mscCs, cardCert));

            After that, the signed message can be generated exactly as when using local key store keys. Of course, there are a few caveats. Since apps cannot control when an NFC connection is established, we can only sign data after the card has been picked up by the device and we have received an Intent with a live IsoDep instance. Additionally, since signing can take a few seconds, we need to make sure the connection is not broken, by placing the device on top of the card (or using some sort of awkward case with a card slot). Our implementation also takes a few shortcuts by hard-coding the certificate object ID and size, as well as the card PIN, but those can be remedied with a little more code. The UI of our homebrew S/MIME client is shown below.


            After you import a PKCS#12 file in the system credential store you can sign emails using the imported keys. The 'Sign with NFC' button is only enabled when a compatible card has been detected. The easiest way to verify the email signature is to send a message to a desktop client that supports S/MIME. There are also a few Android email apps that support S/MIME, but setup can be a bit challenging because they often use their own trust and key stores. You can also dump the generated message to external storage using MimeMessage.writeTo() and then parse the CMS structure using the OpenSSL cms command:

            $ openssl cms -cmsout -in signed.message -noout -print
            CMS_ContentInfo:
            contentType: pkcs7-signedData (1.2.840.113549.1.7.2)
            d.signedData:
            version: 1
            digestAlgorithms:
            algorithm: sha512 (2.16.840.1.101.3.4.2.3)
            parameter: NULL
            encapContentInfo:
            eContentType: pkcs7-data (1.2.840.113549.1.7.1)
            eContent: <absent>
            certificates:
            d.certificate:
            cert_info:
            version: 2
            serialNumber: 4
            signature:
            algorithm: sha1WithRSAEncryption (1.2.840.113549.1.1.5)
            ...
            crls:
            <empty>
            signerInfos:
            version: 1
            d.issuerAndSerialNumber:
            issuer: C=JP, ST=Tokyo, CN=keystore-test-CA
            serialNumber: 3
            digestAlgorithm:
            algorithm: sha512 (2.16.840.1.101.3.4.2.3)
            parameter: NULL
            signedAttrs:
            object: contentType (1.2.840.113549.1.9.3)
            value.set:
            OBJECT:pkcs7-data (1.2.840.113549.1.7.1)

            object: signingTime (1.2.840.113549.1.9.5)
            value.set:
            UTCTIME:Oct 25 16:25:29 2013 GMT

            object: messageDigest (1.2.840.113549.1.9.4)
            value.set:
            OCTET STRING:
            0000 - 88 bd 87 84 15 53 3d d8-72 64 c7 36 f8 .....S=.rd.6.
            000d - b0 f3 39 90 b2 a4 77 56-5c 9f e4 2e 7c ..9...wV\...|
            001a - 7d 2e 0b 08 b4 b7 e7 6c-e9 b6 61 00 13 }......l..a..
            0027 - 25 62 69 2a bc 08 5b 4c-4f c9 73 cf d3 %bi*..[LO.s..
            0034 - c6 1e 51 c2 5f c1 64 77-3b 45 e2 cb ..Q._.dw;E..
            signatureAlgorithm:
            algorithm: rsaEncryption (1.2.840.113549.1.1.1)
            parameter: NULL
            signature:
            0000 - a0 d0 ce 35 46 8c f9 cd-e5 db ed d8 e3 f0 08 ...5F..........
            ...
            unsignedAttrs:
            <empty>

            Email encryption using the NFC smart card can be implemented in a similar fashion, but this time the card will be required when decrypting the message.

            Summary

            Practically all NFC-enabled Android devices can be used to communicate with a contactless or dual-interface smart card. If the interface of card applications is known, it is fairly easy to implement an Android component that exposes card functionality via a custom interface, or even as a standard JCE provider. The card's cryptographic functionality can then be used to secure email or provide HTTPS and VPN authentication. This could be especially useful when dealing with keys that have been generated on the card and cannot be extracted. If a PKCS#12 backup file is available, importing the file in the system credential store can provide a better user experience and comparable security levels if the device has a hardware-backed credential store. 

            Unlocking Android devices using an OTP via NFC

            Our last post showed how to use a contactless smart card to sign email on Android. While storing cryptographic keys used with PKI or PGP is one of the main use cases for smart cards, other usages are gaining popularity as well. Additionally, the traditional 'card' form factor has evolved, and there are now different devices that embed a secure element (basically, the smart card chip) and make its functionality available without requiring a bulky card reader. One popular and affordable device that embeds a secure element is the YubiKey Neo from Yubico. In this post we'll show how you can use the YubiKey Neo to unlock your Android device over NFC.

            One-time passwords

            Before we discuss how the YubiKey Neo can be used to unlock an Android device, let's say a few words about OTPs. As the name implies, one-time passwords are passwords that are valid for a single login or transaction. OTPs can be generated based on an algorithm that derives each next password from the previous one, or by using some sort of challenge-response mechanism. Another approach is to use a shared secret, called a seed, along with some dynamic value such as a counter or a value derived from the current time. While OTP generation based on a shared seed is usually fairly easy to implement, the dynamic values at the OTP token (called the prover) and the verifier (authentication server) can get out of sync, and validation algorithms need to account for that.

            Many OTP schemes are proprietary and incompatible with each other. Fortunately, widely adopted open standards exist as well, most notably the HMAC-based One Time Password (HOTP) algorithm developed by the Initiative for Open Authentication (OATH). HOTP uses a secret key and a counter as input to the HMAC-SHA1 message authentication code (MAC) algorithm, truncates the calculated MAC value and converts it to a human-readable code, usually a 6-digit number. A later variation is the TOTP (Time-Based One-Time Password) algorithm, which replaces the counter with a value derived from the current Unix time (i.e., the number of seconds since midnight of January 1, 1970 UTC). The derived value T is calculated using an initial time T0 and a step X as follows: T = (Current Unix time - T0) / X. Each generated OTP is valid for X seconds, 30 by default. TOTP is used by Google Authenticator and by the Yubico OATH applet which we will use in our demo.
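            For reference, the whole algorithm fits in a few lines of Java. Below is a minimal sketch of RFC 6238 TOTP with the RFC 4226 dynamic truncation, assuming T0 = 0 and X = 30:

            import java.nio.ByteBuffer;
            import javax.crypto.Mac;
            import javax.crypto.spec.SecretKeySpec;

            public class Totp {
                public static int generate(byte[] seed, long unixTime) throws Exception {
                    long t = unixTime / 30;                    // T = (time - T0) / X
                    byte[] counter = ByteBuffer.allocate(8).putLong(t).array();
                    Mac mac = Mac.getInstance("HmacSHA1");
                    mac.init(new SecretKeySpec(seed, "RAW"));
                    byte[] hash = mac.doFinal(counter);
                    int offset = hash[hash.length - 1] & 0x0f; // dynamic truncation
                    int binary = ((hash[offset] & 0x7f) << 24)
                            | ((hash[offset + 1] & 0xff) << 16)
                            | ((hash[offset + 2] & 0xff) << 8)
                            | (hash[offset + 3] & 0xff);
                    return binary % 1000000;                   // 6-digit code
                }
            }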

            YubiKey Neo

            The original YubiKey (now called the YubiKey Standard) was an innovative two-factor authentication (2FA) token. It has a USB interface and presents itself as a USB keyboard when plugged in, and thus does not require any special drivers. It has a single capacitive button that outputs an OTP when pressed. Because the device functions as a keyboard, the OTP can be automatically entered into any text field of a desktop or Web application, or even a terminal window, requiring very little modification to existing applications. The OTP is generated using a 128-bit key stored inside the device, using either Yubico's own OTP algorithm or HOTP.

            The YubiKey Neo retains the form factor of the original YubiKey, but adds an important new component: a secure element (SE), accessible both via USB and over NFC. The SE offers a JavaCard 3.0/JCOP 2.4.2-compatible execution environment, an ISO 14443A NFC interface, MIFARE Classic emulation and an NDEF applet for interaction with YubiKey functionality. When plugged into a USB port, depending on its configuration, the Neo presents itself either as a keyboard (HID device), as a standard CCID smart card reader, or as both when in composite mode. As the SE is fully compatible with the JavaCard and GlobalPlatform standards, additional applets can be loaded with standard tools. Recent batches ship with pre-installed OATH, PGP and PIV applets, and the code for both the OATH and PGP applets is available. Yubico provides a Google Authenticator-compatible Android application, Yubico Authenticator, that allows you to store the keys used to generate OTPs on the Neo. This ensures that neither attackers who have physical access to your Android device, nor applications with root access, can extract your OTP keys.

            The Android lockscreen

            Before we can figure out how to unlock an Android device using an OTP, we need to understand how the lockscreen works. The lockscreen is formally known as the keyguard and is implemented much like regular Android applications: with widgets laid out on a window. What makes it special is that its window lives on a very high window layer that other applications cannot draw on top of or get control over. Additionally, the keyguard intercepts the normal navigation buttons, making it impossible to bypass and thus 'locking' the device. The keyguard window layer is not the highest one, however: dialogs originating from the keyguard itself, and the status bar, can be drawn over the keyguard. You can see a list of the currently shown windows using the Hierarchy Viewer tool available with the ADT. When the screen is locked, the active window is the Keyguard window, as shown in the screenshot below.

            Before Android 4.0, it was possible for third-party applications to show windows in the keyguard layer, and this approach was often used in order to intercept the Home button and implement 'kiosk' style applications. Since Android 4.0 however, adding windows to the keyguard layer requires the INTERNAL_SYSTEM_WINDOW signature permission, which is available only to system applications.

            For a long time the keyguard was an implementation detail of Android's window system and was not separated into a dedicated component. With the introduction of lockscreen widgets, dreams (i.e., screensavers) and support for multiple users, the keyguard gained quite a lot of functionality and was eventually extracted into a dedicated system application, Keyguard, in Android 4.4. The Keyguard app lives in the com.android.systemui process, along with the core Android UI implementation. Most importantly for our purposes, the Keyguard app includes a service with a remote interface, IKeyguardService. This service allows its clients to check the current state of the keyguard, set the current user, launch the camera, and hide or disable the keyguard. As can be expected, operations that change the state of the keyguard are protected by a system signature permission, CONTROL_KEYGUARD.

            Unlocking the keyguard

            Stock Android provides three main methods to unlock the keyguard: by drawing a pattern, by entering a PIN or password, or by using image recognition, aka Face Unlock, also referred to as 'weak biometric'. The pattern, PIN and password methods are essentially equivalent: they compare the hash of the user input to a hash stored on the device and unlock it if the values match. The hash for the pattern lock is stored in /data/system/gesture.key as an unsalted SHA-1 value. The hash of the PIN/password is a combination of the SHA-1 and MD5 hash values of the user input, salted with a random value, and is stored in the /data/misc/password.key file. The Face Unlock implementation is proprietary and no details are available about the format of the stored data. Normally not visible to the user are the Google account password unlock method (used when the device is locked after too many incorrect unlock attempts) and the unlock method that uses the PIN or PUK of the SIM card. The Google unlock method uses the proprietary Google Login Service to verify the entered password, and the PIN/PUK method simply sends commands to the SIM card via the RIL interface.
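            To illustrate how little protection the unsalted pattern hash offers, here is a sketch that mirrors AOSP's LockPatternUtils.patternToHash(): each cell of the 3x3 grid is encoded as a single byte and the sequence is hashed with SHA-1, so all possible patterns can be enumerated offline:

            import java.security.MessageDigest;

            // each element of patternCells encodes a grid cell as (row * 3 + column),
            // i.e. a value from 0 to 8
            static byte[] patternToHash(byte[] patternCells) throws Exception {
                return MessageDigest.getInstance("SHA-1").digest(patternCells);
            }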

            As you can see, all unlock methods are based on a fixed PIN, password or pattern. Except in the case of a long and complex password, which is rather hard to input on a touchscreen keyboard, unlock secrets usually have low entropy and can easily be guessed or brute-forced. Android partially protects against such attacks by permanently locking the device after too many unsuccessful attempts. Additionally, security policies introduced by a device administrator application can enforce PIN/password complexity rules and even wipe the device after too many unsuccessful attempts.

            One approach to improving the security of the keyguard is to use an OTP to unlock the device. While this is not directly supported by Android, it can be implemented on production devices by using a device administrator application that periodically changes the unlock PIN or password using the DevicePolicyManager API. One such application is TimePIN (which in part inspired this post), which sets the unlock password based on the current time. TimePIN allows you to set different modifiers that are applied when calculating the current PIN. Modifiers can be stacked, so the transformation can become complex, yet still easy to remember. A secret component, called an offset, can be mixed in for added security.
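            The core of such an app is a single API call. A minimal sketch is shown below; deriveCurrentPin() is a hypothetical, app-specific transformation, and the calling app must be an active device administrator for resetPassword() to succeed:

            // rotate the lockscreen PIN from a device admin app
            DevicePolicyManager dpm =
                    (DevicePolicyManager) getSystemService(DEVICE_POLICY_SERVICE);
            String pin = deriveCurrentPin(System.currentTimeMillis()); // hypothetical
            dpm.resetPassword(pin, 0);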

            Unlocking via NFC

            Authentication methods are usually based on something you know, something only you have, or a combination of the two (two-factor authentication, 2FA). The pattern and PIN/password unlock methods are based on something you know, and Face Unlock can be thought of as based on something you have (your face or a really good picture). However, Face Unlock allows for a fallback to PIN or password when it cannot detect a face, so it can still be unlocked by something you know.

            An alternative way to use something you have to unlock the device is an NFC tag. This is not supported by stock Android, but has been implemented on some devices, for example the Motorola Moto X (as the Motorola Skip accessory). While Motorola Skip is a proprietary solution and no implementation details are available, apps that offer similar functionality, such as NFC LockScreenOff Enabler, compare the UID of the scanned tag to a list of stored values and unlock the device if the UID is in the list. While this is fairly secure, as the UID of most NFC tags is read-only, cards that allow UID modification are available, and a programmable NFC card emulator can emit any UID.

            One problem with implementing NFC unlock is that by default Android does not scan for NFC devices when the screen is off or locked. This is intended as a security measure: if the device read NFC tags while the screen is off, vulnerabilities could be triggered without physical access to the device or the owner noticing, as has been demonstrated. NFC LockScreenOff Enabler and similar applications get around this limitation on rooted devices by installing hooks into system methods, allowing the NFC system service configuration to be modified at runtime.

            Unlocking using the YubiKey Neo

            As we mentioned in the 'YubiKey Neo' section, Yubico provides both a JavaCard applet and a companion Android app that together implement TOTP compatible with Google Authenticator. The Yubico Authenticator app is initialized just like its Google counterpart: either manually or by scanning a QR code. The difference is that Yubico Authenticator saves the OTP seed on the device only temporarily, and deletes it once it has been written to the Neo. To display the current OTP, you need to touch the Neo while the app is active, and touch it again after the OTP expires. If you don't want to enter keys and accounts manually, you can use a QR code generator, such as the one provided by the ZXing project, to generate a URI that includes an account name and seed. The URI format is documented on the Google Authenticator Wiki.

            While unlocking the keyguard certainly doesn't need the full functionality of the Google Authenticator app, displaying the current OTP is useful for debugging, and initializing with a QR code is quite convenient. That's why for our demo we simply modify the Authenticator app slightly instead of writing another OTP source. As we need to provide the OTP to the system NFC service, which runs in a different process, we add a remote AIDL service with a single method that returns the current OTP:

            interface IRemoteOtpSource {

                String getNextCode(String accountName);
            }

            The NFC service can then bind to the OTP service that implements this interface and retrieve the current OTP (a binding sketch follows the manifest below). Of course, providing the OTP to everyone is not a great idea, so we protect the service with a signature permission; since our RemoteAuthenticator app is signed with the platform certificate, the permission can only be granted to system apps:

            <manifest ...>
                ...
                <permission
                    android:name="com.google.android.apps.remoteauthenticator.GET_OTP_CODE"
                    android:protectionLevel="signature" />
                ...
                <application ...>
                    ...
                    <service android:enabled="true" android:exported="true"
                        android:name="com.google.android.apps.authenticator.OtpService"
                        android:permission="com.google.android.apps.remoteauthenticator.GET_OTP_CODE">
                    </service>
                </application>

            </manifest>
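            On the NFC service side, binding to the OTP service and fetching the current code might look something like the sketch below (the package name and the 'lockscreen' account name are assumptions based on our sample app):

            // bind to the remote OTP service exported by RemoteAuthenticator
            Intent intent = new Intent();
            intent.setComponent(new ComponentName(
                    "com.google.android.apps.remoteauthenticator",
                    "com.google.android.apps.authenticator.OtpService"));
            context.bindService(intent, new ServiceConnection() {
                @Override
                public void onServiceConnected(ComponentName name, IBinder service) {
                    IRemoteOtpSource otpSource = IRemoteOtpSource.Stub.asInterface(service);
                    try {
                        // compare this value to the OTP read from the scanned tag
                        String currentOtp = otpSource.getNextCode("lockscreen");
                    } catch (RemoteException e) {
                        // remote service unavailable; leave NFC unlock disabled
                    }
                }

                @Override
                public void onServiceDisconnected(ComponentName name) { }
            }, Context.BIND_AUTO_CREATE);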

            The full source code of the RemoteAuthenticator app is available on Github. Once installed, the app needs to be initialized with the same key and account name as the OATH applet on the YubiKey Neo. Our sample NFC unlock implementation looks for an account named 'lockscreen' when it detects the OATH applet. The interface of the modified app is identical to that of Google Authenticator:



            Before we can use an NFC tag to unlock the keyguard, we need to make sure the system NFC service can detect NFC tags even when the keyguard is locked. As we mentioned earlier, that is not the case in stock Android, so we change the default polling mode from SCREEN_STATE_ON_UNLOCKED to SCREEN_STATE_ON_LOCKED in NfcService.java:

            package com.android.nfc;
            ...

            public class NfcService implements DeviceHostListener {
                ...
                /** minimum screen state that enables NFC polling (discovery) */
                static final int POLLING_MODE = SCREEN_STATE_ON_LOCKED;
                ...
            }

            With this done, we can hook into the NFC service tag dispatch sequence, and, borrowing some code from the Yubico Authenticator app, check whether the scanned tag includes the OATH applet. If so, we read out the current OTP and compare it with the OTP returned by the RemoteAuthenticator app installed on the device. If the OTPs match, we dismiss the keyguard and let the dispatch continue. If the tag doesn't contain the OATH applet, or the OTPs don't match, we do not dispatch the tag. To unlock the keyguard we simply call the keyguardDone() method of the system KeyguardService. The unlock process might look something like this:



            Full source code for the modified NFC service is available on Github (in the 'otp-unlock' branch). Note that while this demo implementation handles basic error cases, like the OATH applet not being found or the connection with the tag being lost, it is not particularly robust. It only tries to connect to remote services once, and if either of them is unavailable, NFC unlock is disabled altogether. It doesn't provide any visual indication that NFC unlock is happening either; the keyguard simply disappears, as seen in the video above. Another missing piece is multi-user support: in order to support multiple users, the code needs to look for the current user's account on the NFC device, and not for a hardcoded name. Finally, NFC unlock as currently implemented is not a full unlock method: it cannot be selected in the Screen security settings, but simply supplements the currently selected unlock method.
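
            For illustration, the core of the check performed by the dispatch hook might look roughly like this. This is a minimal sketch with assumed helper names -- readTotpFromTag() stands in for the OATH SELECT and CALCULATE APDU exchange borrowed from Yubico Authenticator -- and not the exact code from the 'otp-unlock' branch:

            // Sketch only: checks the OTP read from the tag against the one provided
            // by the remote OTP service bound earlier. Helper names are assumptions.
            private boolean tryOtpUnlock(Tag tag) {
                IsoDep isoDep = IsoDep.get(tag);
                if (isoDep == null || mOtpService == null) {
                    return false;
                }
                try {
                    isoDep.connect();
                    // SELECT the OATH applet and send a CALCULATE command (details omitted)
                    String tagOtp = readTotpFromTag(isoDep, "lockscreen");
                    String expectedOtp = mOtpService.getNextCode("lockscreen");
                    // if this returns true, the caller calls keyguardDone() and dispatches the tag
                    return tagOtp != null && tagOtp.equals(expectedOtp);
                } catch (IOException e) {
                    return false;
                } catch (RemoteException e) {
                    return false;
                } finally {
                    try {
                        isoDep.close();
                    } catch (IOException ignored) {
                    }
                }
            }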

            Summary

            As of Android 4.4, the Android keyguard can be queried by third party applications and dismissed by apps that hold the CONTROL_KEYGUARD permission. This makes it easy to implement alternative unlock mechanisms, such as NFC unlock. However, NFC tag polling is disabled by default when the screen is locked, so adding an NFC unlock mechanism requires modifying the system NFC service. For added security, NFC unlock methods should rely not only on the UID of the scanned tag, but on some secret information that is securely stored inside the tag. This could be a private key for use in some sort of signature-based authentication scheme, or an OTP seed. An easy way to implement OTP-based NFC unlock is to use the Yubico OATH applet, pre-installed on the YubiKey Neo, along with a modified Google Authenticator app that offers a remote interface to read the current OTP. 

            Android Security Internals

            If you have been following this blog for a while, you might have noticed that there haven't been many new posts in the past few months. There are two reasons for this: me being lazy and me working on a book. The book is progressing nicely, but is still a long way from being finished, so updates will probably continue to be spotty for a while.

            What is this all about?

            The book is a continuation of my quest to understand how Android works and, as you may have guessed already, is called "Android Security Internals". That's a somewhat ambitious title, but it reflects my goal -- to present both an overview of Android's security architecture, and to show how its key components are implemented and interoperate. Meeting this goal requires starting with the most fundamental concepts such as Binder IPC, sandboxing, file ownership and permissions, and looking into key system services that bind the OS together, such as the PackageManagerService and ActivityManagerService. After (hopefully) explaining the fundamentals in sufficient detail, the book goes on to discuss higher level features such as credential storage, account management and device policy support. Security features added in recent versions, for example SELinux and verified boot, are also introduced. While the book does cover topics traditionally associated with 'rooting', such as unlocking the bootloader, recovery images and superuser apps, this is not a main topic. Finding and developing exploits in order to gain root access is not discussed at all, so if you are interested in these topics you might want to pick up the recently released Android Hacker's Handbook, which covers them very well and in ample detail. Finally, almost all of the material is based on analysis of and experimentation with AOSP source code, and thus almost no vendor extensions or non-open source features are covered.

            The book

            The book is being produced by No Starch Press, who have a long history of publishing great technical books, and have lately been introducing some truly beautiful Lego books as well. On top of that, they are a real pleasure to work with, so do call them first if you ever consider writing a book. 

            The book is scheduled for September 2014; hopefully I'll be able to finish it in time to meet that date. If that sounds like a long wait, there is good news: the book is available via No Starch's Early Access program and you can read the first couple of chapters right now. New chapters will be made available once they are ready. While there is still a lot of work to be done, the book does already have a cover, and a great one at that:

            While I can't discuss progress in detail, the better part of the book is done and is in various stages of editing and review. Here is the current table of contents, subject to change, of course, but probably nothing too drastic.

            Table of contents

            Chapter 1: Android Security Model Overview
            Chapter 2: Permissions
            Chapter 3: Package Management
            Chapter 4: User Management
            Chapter 5: Cryptographic Providers (Available in Early Access)
            Chapter 6: Network Security and PKI
            Chapter 7: Credential Storage (Available in Early Access)
            Chapter 8: Online Account Management
            Chapter 9: Enterprise Security
            Chapter 10: Device Security
            Chapter 11: NFC and Secure Elements
            Chapter 12: SELinux
            Chapter 13: Development Device Security and Root Access

            If you have found this blog interesting or helpful at one time or another, hopefully this book is for you. While some of the material is based on previous blog posts, it has been largely re-written and extended, and most importantly professionally edited (thanks Bill!) and reviewed (thanks Kenny!), so it should be both much easier to read and more accurate. Most of the material is completely new and written exclusively for the book.

            That's it for now; major updates will be posted here, more minor ones via my Google+ account. Finally, do follow No Starch Press on Twitter or subscribe to their newsletter to get updates about upcoming books and Early Access releases.

            Using KitKat verified boot

            Android 4.4 introduced a number of security enhancements, most notably SELinux in enforcing mode. One security feature that initially got some press attention, because it was presumably aiming to 'end all custom firmware', but hasn't been described in much detail, is verified boot. This post will briefly explain how verified boot works and then show how to configure and enable it on a Nexus device.

            Verified boot with dm-verity

            Android's verified boot implementation is based on the dm-verity device-mapper block integrity checking target. Device-mapper is a Linux kernel framework that provides a generic way to implement virtual block devices. It is used to implement volume management (LVM), full-disk encryption (dm-crypt), RAIDs and even distributed replicated storage (DRBD). Device-mapper works by essentially mapping a virtual block device to one or more physical block devices, optionally modifying transferred data in transit. For example, dm-crypt decrypts read physical blocks and encrypts written blocks before committing them to disk. Thus disk encryption is transparent to users of the virtual dm-crypt block device. Device-mapper targets can be stacked on top of each other, making it possible to implement complex data transformations. 

            As we mentioned, dm-verity is a block integrity checking target. What this means is that it transparently verifies the integrity of each device block as it is being read from disk. If the block checks out, the read succeeds; if not, the read generates an I/O error as if the block were physically corrupt. Under the hood dm-verity is implemented using a pre-calculated hash tree which includes the hashes of all device blocks. The leaf nodes of the tree include hashes of physical device blocks, while intermediate nodes are hashes of their child nodes (hashes of hashes). The root node is called the root hash and is based on all hashes in lower levels (see figure below). Thus a change in even a single device block will result in a change of the root hash, and in order to verify a hash tree we only need to verify its root hash. At runtime dm-verity calculates the hash of each block when it is read and verifies it using the pre-calculated hash tree. Since reading data from a physical device is already a time-consuming operation, the latency added by hashing and verification is relatively low.
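
            To get a rough sense of the tree's size: with 4 KiB blocks and SHA-256 (32-byte) digests, each hash block holds 4096 / 32 = 128 digests. The 204800-block (800 MiB) system filesystem used in the example below therefore needs 204800 / 128 = 1600 first-level hash blocks, which are in turn covered by 13 second-level hash blocks and, finally, a single root hash.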

            [Image from Android dm-verity documentation,  licensed under Creative Commons Attribution 2.5]

            Because dm-verity depends on a pre-calculated hash tree over all blocks of a device, the underlying device needs to be mounted read-only for verification to be possible. Most filesystems record mount times in their superblock or similar metadata, so even if no files are changed at runtime, block integrity checks will fail if the underlying block device is mounted read-write. This can be seen as a limitation, but it works well for devices or partitions that hold system files, which are only changed by OS updates. Any other change indicates either OS or disk corruption, or a malicious program that is trying to modify the OS or masquerade as a system file. dm-verity's read-only requirement also fits well with Android's security model, which only hosts application data on a read-write partition, and keeps OS files on the read-only system partition.

            Android implementation

            dm-verity was originally developed in order to implement verified boot in Chrome OS, and was integrated into the Linux kernel in version 3.4. It is enabled with the CONFIG_DM_VERITY kernel configuration item. Like Chrome OS, Android 4.4 also uses the kernel's dm-verity target, but the cryptographic verification of the root hash and mounting of verified partitions are implemented differently from Chrome OS.

            The RSA public key used for verification is embedded in the boot partition under the verity_key filename and is used to verify the dm-verity mapping table. This mapping table holds the locations of the target device and the offset of the hash table, as well as the root hash and salt. The mapping table and its signature are part of the verity metablock which is written to disk directly after the last filesystem block of the target device. A partition is marked as verifiable by adding the verify flag to the Android-specific fs_mgr flags field of the device's fstab file. When Android's filesystem manager encounters the verify flag in fstab, it loads the verity metadata from the block device specified in fstab and verifies its signature using the verity_key. If the signature check succeeds, the filesystem manager parses the dm-verity mapping table and passes it to the Linux device-mapper, which uses the information contained in the mapping table to create a virtual dm-verity block device. This virtual block device is then mounted at the mount point specified in fstab in place of the corresponding physical device. As a result, all reads from the underlying physical device are transparently verified against the pre-generated hash tree. Modifying or adding files, or even remounting the partition in read-write mode, results in an integrity verification failure and an I/O error.

            We must note that as dm-verity is a kernel feature, in order for the integrity protection it provides to be effective, the kernel the device boots needs to be trusted. On Android, this means verifying the boot partition, which also includes the root filesystem RAM disk (initrd) and the verity public key. This process is device-specific and is typically implemented in the device bootloader, usually by using an unmodifiable verification key stored in hardware to verify the boot partition's signature.

            Enabling verified boot

            The official documentation describes the steps required to enable verified boot on Android, but lacks concrete information about the actual tools and commands that are needed. In this section we show the commands required to create and sign a dm-verity hash table and demonstrate how to configure an Android device to use it. Here is a summary of the required steps: 
            1. Generate a hash tree for the system partition.
            2. Build a dm-verity table for that hash tree.
            3. Sign the dm-verity table to produce a table signature.
            4. Bundle the table signature and dm-verity table into verity metadata.
            5. Write the verity metadata and the hash tree to the system partition.
            6. Enable verified boot in the device's fstab file.
            As we mentioned earlier, dm-verity can only be used with a device or partition that is mounted read-only at runtime, such as Android's system partition. While verified boot can be applied to other read-only partitions, such as those hosting proprietary firmware blobs, this example uses the system partition, as protecting OS files results in considerable device security benefits. 

            A dm-verity hash tree is generated with the dedicated veritysetup program. veritysetup can operate directly on block devices or use filesystem images and write the hash table to a file. It is supposed to produce platform-independent output, but hash tables produced on desktop Linux didn't quite agree with Android, so for this example we'll generate the hash tree directly on the device. To do this we first need to compile veritysetup for Android. A project that generates a statically linked veritysetup binary is provided on Github. It uses the OpenSSL backend for hash calculations and has only been slightly modified (in a not too portable way...) to allow for the different size of the off_t data type, which is 32-bit in current versions of Android's bionic library. 

            In order to add the hash tree directly to the system partition, we first need to make sure that there is enough space to hold the hash tree and the verity metadata block (32k) after the last filesystem block. As most devices typically use the whole system partition, you may need to modify the BOARD_SYSTEMIMAGE_PARTITION_SIZE value in your device's BoardConfig.mk to allow for storing verity data. After you have adjusted the size of the system partition, transfer the veritysetup binary to the cache or data partitions of the device, and boot a recovery that allows root shell access over ADB. To generate and write the hash tree to the device we use the veritysetup format command as shown below.

            # veritysetup --debug --hash-offset 838893568 --data-blocks 204800 format \
            /dev/block/mmcblk0p21 /dev/block/mmcblk0p21
            ...
            # Updating VERITY header of size 512 on device /dev/block/mmcblk0p21, offset 838893568.
            VERITY header information for /dev/block/mmcblk0p21
            UUID: 0dd970aa-3150-4c68-abcd-0b8286e6000
            Hash type: 1
            Data blocks: 204800
            Data block size: 4096
            Hash block size: 4096
            Hash algorithm: sha256
            Salt: 1f951588516c7e3eec3ba10796aa17935c0c917475f8992353ef2ba5c3f47bcb
            Root hash: 5f061f591b51bf541ab9d89652ec543ba253f2ed9c8521ac61f1208267c3bfb1

            This example was executed on a Nexus 4; make sure you use the correct block device for your phone instead of /dev/block/mmcblk0p21. The --hash-offset parameter is needed because we are writing the hash tree to the same device that holds filesystem data. It is specified in bytes (not blocks) and needs to point to a location after the verity metadata block: adjust according to your filesystem size so that hash_offset > filesystem_size + 32k. The next parameter, --data-blocks, specifies the number of blocks used by the filesystem. The default block size is 4096, but you can specify a different size using the --data-block-size parameter. This value needs to match the size allocated to the filesystem with BOARD_SYSTEMIMAGE_PARTITION_SIZE. If the command succeeds it will output the calculated root hash and the salt value used, as shown above. Everything but the root hash is saved in the superblock (first block) of the hash table. Make sure you save the root hash, as it is required to complete the verity setup.
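
            For reference, the --hash-offset value used in the command above is simply the size of the filesystem plus the 32k verity metadata block:

            hash_offset = data_blocks * block_size + metadata_size
                        = 204800 * 4096 + 32768
                        = 838893568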

            Once you have the root hash and salt, you can generate and sign the dm-verity table. The table is a single line that contains the name of the block device, block sizes, offsets, salt and root hash values. You can use the gentable.py script (edit constant values accordingly first) to generate it or write it manually based on the output of veritysetup. See dm-verity's documentation for details about the format. For our example it looks like this (single line, split for readability):

            1 /dev/block/mmcblk0p21 /dev/block/mmcblk0p21 4096 4096 204800 204809 sha256 \
            1f951588516c7e3eec3ba10796aa17935c0c917475f8992353ef2ba5c3f47bcb \
            5f061f591b51bf541ab9d89652ec543ba253f2ed9c8521ac61f1208267c3bfb1

            Next, generate a 2048-bit RSA key and sign the table using OpenSSL. You can use the command below or the sign.sh script on Github.

            $ openssl dgst -sha1 -sign verity-key.pem -out table.sig table.bin

            Once you have a signature you can generate the verity metadata block, which includes a magic number (0xb001b001) and the metadata format version, followed by the RSA PKCS#1.5 signature blob and table string, padded with zeros to 32k. You can generate the metadata block with the mkverity.py script by passing the signature and table files like this:

            $ ./mkverity.py table.sig table.bin verity.bin

            Next, write the generated verity.bin file to the system partition using dd or a similar tool, right after the last filesystem block and before the start of the verity hash table. Using the same number of data blocks passed to veritysetup, the needed command (which also needs to be executed in recovery) becomes:

            # dd if=verity.bin of=/dev/block/mmcblk0p21 bs=4096 seek=204800

            Finally, you can check that the partition is properly formatted using the veritysetup verify command as shown below, where the last parameter is the root hash:

            # veritysetup --debug --hash-offset 838893568 --data-blocks 204800 verify \
            /dev/block/mmcblk0p21 /dev/block/mmcblk0p21 \
            5f061f591b51bf541ab9d89652ec543ba253f2ed9c8521ac61f1208267c3bfb1

            If verification succeeds, reboot the device and verify that the device boots without errors. If it does, you can proceed to the next step: add the verification key to the boot image and enable automatic integrity verification.

            The RSA public key used for verification needs to be in mincrypt format (also used by the stock recovery when verifying OTA file signatures), which is a serialization of mincrypt's RSAPublicKey structure. The interesting thing about this structure is that it doesn't simply include the modulus and public exponent values, but contains pre-computed values used by mincrypt's RSA implementation (based on Montgomery reduction). Therefore converting an OpenSSL RSA public key to mincrypt format requires some modular operations and is not simply a binary format conversion. You can convert the PEM key using the pem2mincrypt tool (conversion code shamelessly stolen from secure adb's implementation). Once you have converted the key, include it in the root of your boot image under the verity_key filename. The last step is to modify the device's fstab file in order to enable block integrity verification for the system partition. This is simply a matter of adding the verify flag, as shown below:

            /dev/block/platform/msm_sdcc.1/by-name/system  /system  ext4  ro,barrier=1  wait,verify

            Next, verify that your kernel configuration enables CONFIG_DM_VERITY (enable it if needed) and build your boot image. Once you have boot.img, you can try booting the device with it using fastboot boot boot.img (without flashing it). If the hash table and verity metadata block have been generated and written correctly, the device should boot, and /system should be a mount of the automatically created device-mapper virtual device, as shown below. If the boot is successful, you can permanently flash the boot image to the device.

            # mount|grep system
            /dev/block/dm-0 /system ext4 ro,seclabel,relatime,data=ordered 0 0

            Now any modifications to the system partition will result in read errors when reading the corresponding file(s). Unfortunately, system modifications by file-based OTA updates, which modify file blocks without updating verity metadata, will also invalidate the hash tree. As mentioned in the official documentation, in order to be compatible with dm-verity verified boot, OTA updates should also operate at the block level, ensuring that both file blocks and the hash tree and metadata are updated. This requires changing the current OTA update infrastructure, which is probably one of the reasons verified boot hasn't been deployed to production devices yet.

            Summary

            Android includes a verified boot implementation based on the dm-verity device-mapper target since version 4.4. dm-verity is enabled by adding a hash table and a signed metadata block to the system partition and specifying the verify flag in the device's fstab file. At boot time Android verifies the metadata signature and uses the included device-mapper table to create and mount a virtual block device at /system. As a result, all reads from /system are verified against the dm-verity hash tree, and any modification to the system partition results in I/O errors. 


            Secure voice communication on Android

            While the topic of secure voice communication on mobile is hardly new, it has been getting a lot of media attention following the official release of the Blackphone. Consequently, this is a good time to go back to basics and look into how secure voice communication is typically implemented. While this post focuses on Android, most of the discussion applies to other platforms too, with only the mobile clients presented being Android specific.

            Voice over IP

            Modern mobile networks already encrypt phone calls, so voice communication is secure by default, right? As it turns out, the original GSM encryption protocol (A5/1) is quite weak and can be attacked with readily available hardware and software. The somewhat more modern alternative (A5/3) is also not without flaws, and in addition its adoption has been fairly slow, especially in some parts of the world. Finally, mobile networks depend on a shared key, which, while protected by hardware (UICC/SIM card) on mobile phones, can be obtained from MNOs (via legal or other means) and used to enable call interception and decryption.

            So what's the alternative? Short of building your own cellular network, the alternative is to use the data connectivity of the device to transmit and receive voice. This strategy is known as Voice over IP (VoIP) and has been around for a while, but the data speeds offered by mobile networks have only recently reached levels that make it practical on mobiles.

            Session Initiation Protocol

            Different technologies and standards that enable VoIP are available, but by far the most widely adopted one relies on the Session Initiation Protocol (SIP). As the name implies, SIP is a signalling protocol, whose purpose is to establish a media session between endpoints. A session is established by discovering the remote endpoint(s), negotiating a media path and codec, and establishing one or more media streams between the endpoints. Media negotiation is achieved with the help of the Session Description Protocol (SDP), and media is typically transmitted using the Real-time Transport Protocol (RTP). While a SIP client, or more correctly a user agent (UA), can connect directly to a peer, peer discovery usually makes use of one or more well-known registrars. A registrar is a SIP endpoint (server) which accepts REGISTER requests from a set of clients in the domain(s) it is responsible for, and offers a location service to interested parties, much like DNS. Registration is dynamic and temporary: each client registers its SIP URI and IP address with the registrar, thus making it possible for other peers to discover it for the duration of the registration period. The SIP URI can contain arbitrary alphanumeric characters (much like an email address), but the username part is typically limited to numbers for backward compatibility with existing networks and devices (e.g., sip:0123456789@mydomain.org).
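
            As a concrete illustration, a minimal REGISTER request might look something like this (the addresses, branch and tag values are, of course, made up):

            REGISTER sip:mydomain.org SIP/2.0
            Via: SIP/2.0/UDP 10.0.0.2:5060;branch=z9hG4bK776asdhds
            Max-Forwards: 70
            From: <sip:0123456789@mydomain.org>;tag=49583
            To: <sip:0123456789@mydomain.org>
            Call-ID: 843817637684230@10.0.0.2
            CSeq: 1 REGISTER
            Contact: <sip:0123456789@10.0.0.2:5060>
            Expires: 3600
            Content-Length: 0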

            A SIP call is initiated by a UA sending an INVITE message specifying the target peer, which might be mediated by multiple SIP 'servers' (registrars and/or proxies). Once a media path has been negotiated, the two endpoints (Phone A and Phone B in the figure below) might communicate directly (as shown in the figure) or via one or more media proxies, which help bridge SIP clients that don't have a publicly routable IP address (such as those behind NAT), implement conferencing, etc.


            SIP on mobiles

            Because SIP calls are ultimately routed using the registered IP address of the target peer, arguably SIP is not very well suited for mobile clients. In order to receive calls, clients need to remain online even when not actively used and keep a constant IP address for fairly long periods of time. Additionally, because public IP addresses are rarely assigned to mobile clients, establishing a direct media channel between two mobile peers can be challenging. The online presence problem is typically solved by using a complementary, low-overhead signalling mechanism such as Google Cloud Messaging (GCM) for Android in order to "wake up" the phone before it can receive a call. The requirement for a stable IP address is typically handled by shorter registration times and by triggering registration each time the connectivity of the device changes (e.g., going from LTE to WiFi). The lack of a public IP address is usually overcome by using various supporting methods, ranging from querying STUN servers to discover the external public IP address of a peer, to media proxy servers which bridge connections between heavily NAT-ed clients. By combining these and other techniques, a well-implemented SIP client can offer an alternative voice communication channel on a mobile phone, while integrating with the OS and keeping resource usage fairly low.

            Android has included a built-in SIP client as part of the framework, in the android.net.sip package, since version 2.3. However, the interface offered by this package is very high level, offers few options and does not really support extension or customization. Additionally, it hasn't received any new features since the initial release, and, most importantly, is optional and therefore unavailable on some devices. For this reason, most popular SIP clients for Android are implemented using third party libraries such as PJSIP, which support advanced SIP features and offer a more flexible interface.

            Securing SIP

            As mentioned above, SIP is a signalling protocol. As such, it does not carry any voice data, only information related to setting up media channels. A SIP session includes information about each of the peers and any intermediate servers, including IP addresses, supported codecs, user agent strings, etc. Therefore, even if the media channel is encrypted, and the contents of a voice call cannot be easily recovered, the information contained in the accompanying SIP messages -- who called whom, where the call originated from and when -- can be equally important or damaging. Additionally, as we'll show in the next section, SIP can be used to negotiate keys for media channel encryption, in which case intercepting SIP messages can lead to recovering plaintext voice data.

            SIP is a transport-independent text-based protocol, similar to HTTP, which is typically transmitted over UDP. When transmitted over an unencrypted channel, it can easily be intercepted using standard packet capture software or dumped to a log file at any of the intermediate nodes a SIP message traverses before reaching its destination. Multiple tools that can automatically correlate SIP messages with the associated media streams are readily available. This lack of inherent security features requires that SIP be secured by protecting the underlying transport channel.

            VPN

            A straightforward method to secure SIP is to use a VPN to connect peers. Because most VPNs support encryption, both signalling and media streams tunneled through the VPN are automatically protected. As an added benefit, using a VPN can solve the NAT problem by offering directly routable private addresses to peers. Using a VPN works well for securing VoIP trunks between SIP servers which are linked using a persistent, low-latency and high-bandwidth connection. However, the overhead of a VPN connection on mobile devices can be too great to sustain a voice channel of even average quality. Additionally, using a VPN can result in highly variable latency (jitter), which can deteriorate voice quality even if jitter buffers are used. That said, many Android SIP clients can be set up to automatically use a VPN if available. The underlying VPN can be anything supported on Android, for example the built-in IPSec VPN or a third-party VPN such as OpenVPN. However, even if a VPN provides tolerable voice quality, typically it only ensures an encrypted tunnel to a SIP proxy, and there are no guarantees that any SIP messages or voice streams that leave the proxy are encrypted. Still, a VPN can be a usable solution if all calls are terminated within a trusted private network (such as a corporate network).

            Secure SIP

            Because SIP is transport-independent it can be transmitted over any supported protocol, including a connection-oriented one such as TCP. When using TCP, a secure channel between SIP peers can be established with the help of the standard TLS protocol. Peer authentication is handled in the usual manner -- using PKI certificates, which allow for mutual authentication. However, because a SIP message typically traverses multiple servers until it reaches its final destination, there is no guarantee that the message will be always encrypted. In other words, SIP-over-TLS, or secure SIP, does not provide end-to-end security but only hop-to-hop security.

            SIP-over-TLS is relatively well supported by all major SIP servers, including open source ones like Asterisk and FreeSWITCH. For example, enabling SIP-over-TLS in Asterisk requires generating a key and certificate, configuring a few global tls options, and finally requiring peers to use TLS when connecting to the server, as described here. However, Asterisk does not currently support client authentication for SIP clients (although there is some limited support for client authentication on trunk lines).
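
            The relevant configuration might look something like the sip.conf excerpt below (the option names are the ones used by Asterisk's chan_sip; the paths and peer name are placeholders):

            [general]
            tlsenable=yes
            tlsbindaddr=0.0.0.0
            tlscertfile=/etc/asterisk/keys/asterisk.pem

            [myphone]
            transport=tls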

            Most popular Android clients support using the TLS transport for SIP, with some limitations. For example, the popular open source CSipSimple client supports TLS, but only version 1.0 (as well as SSL v2/v3). Additionally, it does not use Android's built-in certificate and key stores, but requires certificates to be saved on external storage in PEM format. Both limitations are due to the underlying PJSIP library, which is built using OpenSSL and requires keys and certificates to be stored as files in OpenSSL's native format. Additionally, server identity is not checked by default, and verification needs to be explicitly enabled, as shown in the screenshot below.


            Another popular VoIP client, Zoiper, doesn't use a pre-initialized trust store at all, but requires peer certificates to be manually confirmed and cached for each SIP server. The commercial Bria Android client (by CounterPath) does use the system trust store and automatically verifies peer identity.

            When a secure SIP connection to a peer is established, VoIP clients indicate this on the call setup and call screens as shown in the CSipSimple screenshot below.


            SIP Alternatives

            While SIP is a widely adopted standard, it is also quite complex and supports many extensions that are not particularly useful in a mobile environment. Instead of SIP, the RedPhone secure VoIP client uses a simple custom signalling protocol based on a RESTful HTTP API (with some additional verbs). The protocol is secured using TLS with server certificates issued by a private CA, which RedPhone clients implicitly trust.

            Securing the media channel

            As mentioned in our brief SIP introduction, the media channel between peers is usually implemented using the RTP protocol. Because the media channel is completely separate from SIP, even if all signalling is carried out over TLS, media streams are unprotected by default. RTP streams can be secured using the Secure RTP (SRTP) profile of the RTP protocol. SRTP is designed to provide confidentiality, message authentication, and replay protection to the underlying RTP streams, as well as to the supporting RTCP protocol. SRTP uses a symmetric cipher, typically AES in counter mode, to provide confidentiality, and a message authentication code (MAC), typically HMAC-SHA1, to provide packet integrity. Replay protection is implemented by maintaining a replay list, which received packets are checked against to detect possible replay.

            When a voice channel is encrypted using SRTP the transmitted data looks like random noise (as any encrypted data should), as shown below.



            SRTP defines a pseudo-random function (PRF) which is used to derive the session keys (used for encryption and authentication) from a master key and master salt. What SRTP does not specify is how the master key and salt should be obtained or exchanged between peers.
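
            The derivation itself is defined in RFC 3711 and uses AES in counter mode as the PRF. In outline, it looks like this, where the label selects the type of session key being derived (encryption, authentication or salting):

            key_id      = label || (packet_index DIV key_derivation_rate)
            x           = key_id XOR master_salt
            session_key = PRF(master_key, x)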

            SDES

            SDP Security Descriptions for Media Streams (SDES) is an extension to the SDP protocol which adds a media attribute that can be used to negotiate a key and other cryptographic parameters for SRTP. The attribute is simply called crypto and can contain a crypto suite, key parameters, and, optionally, session parameters. A crypto attribute which includes a crypto suite and key parameters might look like this:

            a=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:VozD8O2kcDFeclWMjBOwvVxN0Bbobh3I6/oxWYye

            Here AES_CM_128_HMAC_SHA1_80 is a crypto suite which uses AES in counter mode with an 128-bit key for encryption and produces an 80-bit SRTP authentication tag using HMAC-SHA1. The Base64-encoded value that follows the crypto suite string contains the master key (128 bits) concatenated with the master salt (112 bits) which are used to derive SRTP session keys.
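
            To illustrate, decoding the inline value from the crypto attribute above and splitting it into the SRTP master key and salt takes only a few lines of code (a sketch using Android's android.util.Base64 and java.util.Arrays):

            // Decode the SDES inline value and split it into SRTP master key and salt.
            byte[] keyAndSalt = Base64.decode("VozD8O2kcDFeclWMjBOwvVxN0Bbobh3I6/oxWYye",
                    Base64.DEFAULT);
            byte[] masterKey = Arrays.copyOfRange(keyAndSalt, 0, 16);   // 128-bit master key
            byte[] masterSalt = Arrays.copyOfRange(keyAndSalt, 16, 30); // 112-bit master salt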

            SDES does not provide any protection or authentication of the cryptographic parameters it includes, and is therefore only secure when used in combination with SIP-over-TLS (or another secure signalling transport). SDES is widely supported by both SIP servers, hardware SIP phones and software clients. For example, in Asterisk enabling SDES and SRTP is as simple as adding encryption=yes to the peer definition. Most Android SIP clients support SDES and can automatically enable SRTP for the media channel when the INVITE SIP message includes the crypto attribute. For example, in the CSipSimple screenshot above the master key for SRTP was received via SDES.

            The main advantage of SDES is its simplicity. However, it requires that all intermediate servers are trusted, because they have access to the SDP data that includes the master key. Even though the SRTP media stream might be transmitted directly between two peers, SRTP effectively provides only hop-to-hop security, because compromising any of the intermediate SIP servers can result in recovering the master key and eventually the session keys. For example, if the private key of a SIP server involved in SDES key exchange is compromised, and the TLS session that carried the SIP messages did not use forward secrecy, the master key can easily be extracted from a packet capture using Wireshark, as shown below.


            ZRTP

            ZRTP aims to provide end-to-end security for SRTP media streams by using the media channel to negotiate encryption keys directly between peers. It is essentially a key agreement protocol based on a Diffie-Hellman key exchange with added Man-in-the-Middle (MiTM) protections. MiTM protection relies on so-called "short authentication strings" (SAS), which are derived from the session key and are displayed to each calling party. The parties need to confirm that they see the same SAS by reading it to each other over the phone. As an additional MiTM protection, ZRTP uses a form of key continuity, which mixes previously negotiated key material into the shared secret obtained via Diffie-Hellman when deriving session keys. Thus ZRTP does not require a secure signalling channel or a PKI in order to establish an SRTP session key or protect against MiTM attacks.

            On Android, ZRTP is supported both by VoIP clients for dedicated services such as RedPhone and Silent Phone, and by general-purpose SIP clients like CSipSimple. On the server side, ZRTP is supported by both FreeSWITCH and Kamailio (but not by Asterisk), so it is fairly easy to set up a test server and test ZRTP support on Android.

            ZRTP support in CSipSimple can be configured on a per-account basis by setting the ZRTP mode option to "Create ZRTP". It must be noted, however, that ZRTP encryption is opportunistic and will fall back to cleartext communication if the remote peer does not support ZRTP. When the remote party does support ZRTP, CSipSimple shows an SAS confirmation dialog only the first time you connect to a particular peer, and then displays the SAS and encryption scheme in the call dialog as shown below.


            In this case, the voice channel is direct and ZRTP/SRTP provide end-to-end security. However, the SIP proxy server can also establish a separate ZRTP/SRTP channel with each party and proxy the media streams. In this case, the intermediate server has access to unencrypted media streams and the provided security is only hop-to-hop, as when using SDES. For example, when FreeSWITCH establishes a separate media channel with two parties that use ZRTP, CSipSimple will display the following dialog, and the SAS values at the two clients won't match because each client uses a separate session key. Unfortunately, this is not immediately apparent to end users, who may not be familiar with the meaning of the "EndAtMitM" string that signifies this.


            The ZRTP protocol supports a "trusted MiTM" mode which allows clients to verify the intermediate server after completing a key enrollment procedure that establishes a shared key between the client and a particular server. This feature is supported by FreeSWITCH, but not by common Android clients, including CSipSimple.

            Summary

            Android supports the SIP protocol natively, but the provided APIs are restrictive and do not support advanced VoIP features such as media channel encryption. Most major SIP client apps support voice encryption using SRTP, with either SDES or ZRTP for key negotiation. Popular open source SIP servers such as Asterisk and FreeSWITCH also support SRTP, SDES, and ZRTP, and make it fairly easy to build a small-scale secure VoIP network that can be used by Android clients. Hopefully, the Android framework will be extended to include the features required to implement secure voice communication without using third party libraries, and integrate any such features with other security services provided by the platform.

            Accessing the embedded secure element in Android 4.x

            After discussing credential storage and Android's disk encryption, we'll now look at another way to protect your secrets: the embedded secure element (SE) found in recent devices. In the first post of this three part series we'll give some background info about the SE and show how to use the SE communication interfaces Android 4.x offers. In the second part we'll try sending some actual commands in order to find out more about the SE execution environment. Finally we will discuss Google Wallet and how it makes use of the SE.

            What is a Secure Element and why do you want one? 

            A Secure Element (SE) is a tamper-resistant smart card chip capable of running smart card applications (called applets or cardlets) with a certain level of security and features. A smart card is essentially a minimalistic computing environment on a single chip, complete with a CPU, ROM, EEPROM, RAM and I/O port. Recent cards also come equipped with cryptographic co-processors implementing common algorithms such as DES, AES and RSA. Smart cards use various techniques to implement tamper resistance, making it quite hard to extract data by disassembling or analyzing the chip. They come pre-programmed with a multi-application OS that takes advantage of the hardware's memory protection features to ensure that each application's data is only available to itself. Application installation and (optionally) access is controlled by requiring the use of cryptographic keys for each operation.

            The SE can be integrated in mobile devices in various form factors: UICC (commonly known as a SIM card), embedded in the handset, or connected to an SD card slot. If the device supports NFC, the SE is usually connected to the NFC chip, making it possible to communicate with the SE wirelessly. 

            Smart cards have been around for a while and are now used in applications ranging from pre-paid phone calls and transit ticketing to credit cards and VPN credential storage. Since an SE installed in a mobile device has equivalent or superior capabilities to that of a smart card, it can theoretically be used for any application physical smart cards are currently used for. Additionally, since an SE can host multiple applications, it has the potential to replace the bunch of cards people use daily with a single device. Furthermore, because the SE can be controlled by the device's OS, access to it can be restricted by requiring additional authentication (PIN or passphrase) to enable it. 

            So an SE is obviously a very useful thing to have, with a lot of potential, but why would you want to access one from your apps? Aside from the obvious payment applications, which you couldn't realistically build unless you own a bank and have a contract with Visa and friends, there is the possibility of storing other cards you already have (access cards, loyalty cards, etc.) on your phone, but that too is somewhat of a gray area and may require contracting the relevant issuing entities. The main application for third party apps would be implementing and running a critical part of the app, such as credential storage or license verification, inside the SE to guarantee that it is impervious to reversing and cracking. Other apps that can benefit from being implemented in the SE are One Time Password (OTP) generators and, of course, PKI credential (i.e., private key) storage. While implementing those apps is possible today with standard tools and technologies, using them in practice on current commercial Android devices is not that straightforward. We'll discuss this in detail in the second part of the series, but let's first explore the types of SEs available on mobile devices, and the level of support they have in Android. 

            Secure Element form factors in mobile devices

            As mentioned in the previous section, SEs come in different flavours: as a UICC, embedded, or as a plug-in card for an SD card slot. This post is obviously about the embedded SE, but let's briefly review the rest as well. 

            Pretty much any mobile device nowadays has a UICC (aka SIM card, although it is technically a SIM only when used on GSM networks) of some form or another. UICCs are actually smart cards that can host applications, and as such are one form of SE. However, since the UICC is only connected to the baseband processor, which is separate from the application processor that runs the main device OS, they cannot be accessed directly from Android. All communication needs to go through the Radio Interface Layer (RIL), which is essentially a proprietary IPC interface to the baseband. Communication with the UICC SE is carried out using special extended AT commands (AT+CCHO, AT+CCHC, AT+CGLA, as defined by 3GPP TS 27.007), which the current Android telephony manager does not support. The SEEK for Android project provides patches that do implement the needed commands, allowing for communication with the UICC via their standard SmartCard API, which is a reference implementation of the SIMalliance Open Mobile API specification. However, as with most components that talk directly to the hardware in Android, the RIL consists of an open source part (rild) and a proprietary library (libXXX-ril.so). In order to support communication with the UICC secure element, support needs to be added both to rild and to the underlying proprietary library, which is of course up to hardware vendors. The SEEK project does provide a patch that lets the emulator talk directly to a UICC in an external PC/SC reader, but that is only usable for experiments. While there is some talk of integrating this functionality into stock Android (there is even an empty packages/apps/SmartCardService directory in the AOSP tree), there is currently no standard way to communicate with the UICC SE through the RIL (some commercial devices with custom firmware are reported to support it though).
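
            For reference, a UICC SE session using these AT commands might look something like the exchange below: AT+CCHO opens a logical channel to the applet with the given AID and returns a session id, AT+CGLA transmits an APDU over that channel, and AT+CCHC closes it (the AID, APDU and responses are illustrative):

            AT+CCHO="A000000151000000"
            +CCHO: 1
            AT+CGLA=1,10,"00A4040000"
            +CGLA: 4,"9000"
            AT+CCHC=1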

            An alternative way to use the UICC as a SE is via the Single Wire Protocol (SWP), when the UICC is connected to an NFC controller that supports it. This is the case in the Nexus S, as well as the Galaxy Nexus, and while this functionality is supported by the NFC controller drivers, it is disabled by default. This is, however, a software limitation, and people have managed to patch the AOSP source to get around it and successfully communicate with the UICC. This has the greatest potential to become part of stock Android; however, as of the current release (4.1.1), it is still not available. 

            Another form factor for an SE is an Advanced Security SD card (ASSD), which is basically an SD card with an embedded SE chip. When connected to an Android device with an SD card slot, running a SEEK-patched Android version, the SE can be accessed via the SmartCard API. However, Android devices with an SD card slot are becoming the exception rather than the norm, so it is unlikely that ASSD Android support will make it to the mainstream.

            And finally, there is the embedded SE. As the name implies, an embedded SE is part of the device's mainboard, either as a dedicated chip or integrated with the NFC one, and is not removable. The first Android device to feature an embedded SE was the Nexus S, which also introduced NFC support to Android. Subsequent Nexus-branded devices, as well as other popular handsets, have continued this trend. The device we'll use in our experiments, the Galaxy Nexus, is built with NXP's PN65N chip, which bundles an NFC radio controller and an SE (P5CN072, part of NXP's SmartMX series) in a single package (a diagram can be found here).

            NFC and the Secure Element

            NFC and the SE are tightly integrated in Android, and not only because they share the same silicon, so let's say a few words about NFC. NFC has three standard modes of operation: 
            • reader/writer (R/W) mode, allowing for accessing external NFC tags 
            • peer-to-peer (P2P) mode, allowing for data exchange between two NFC devices 
            • card emulation (CE) mode, which allows the device to emulate a traditional contactless smart card 
            What can Android do in each of these modes? The R/W mode allows you to read NDEF tags and contactless cards, such as some transport cards. While this is, of course, useful, it essentially turns your phone into a glorified card reader. P2P mode has been the most demoed and marketed one, in the form of Android Beam. This is only cool the first couple of times though, and since the API only gives you higher-level access to the underlying P2P communication protocol, its applications are currently limited. CE was not available in the initial Gingerbread release, and was introduced later in order to support Google Wallet. This is the NFC mode with the greatest potential for real-life applications. It allows your phone to be programmed to emulate pretty much any physical contactless card, considerably slimming down your physical wallet in the process.

            The embedded SE is connected to the NFC controller through a SignalIn/SignalOut Connection (S2C, standardized as NFC-WI) and has three modes of operation: off, wired and virtual mode. In off mode there is no communication with the SE. In wired mode the SE is visible to the Android OS as if it were a contactless smartcard connected to the RF reader. In virtual mode the SE is visible to external readers as if the phone were a contactless smartcard. These modes are naturally mutually exclusive, so we can communicate with the SE either via the contactless interface (e.g., from an external reader), or through the wired interface (e.g., from an Android app). This post will focus on using the wired mode to communicate with the SE from an app. Communicating via NFC is no different than reading a physical contactless card and we'll touch on it briefly in the last post of the series.

            Accessing the embedded Secure Element

            This is a lot of (useful?) information, but we still haven't answered the main question of this entry: how can we access the embedded SE? The bad news is that there is no public Android SDK API for this (yet). The good news is that accessing it in a standard and (somewhat) officially supported way is possible in current Android versions.

            Card emulation, and consequently, internal APIs for accessing the embedded SE, were introduced in Android 2.3.4, and that is the version Google Wallet launched on. Those APIs were, and remain, hidden from SDK applications. Additionally, using them required system-level permissions (WRITE_SECURE_SETTINGS or NFCEE_ADMIN) in 2.3.4 and subsequent Gingerbread releases, as well as in the initial Ice Cream Sandwich release (4.0, API Level 14). What this means is that only Google (for Nexus devices) and mobile vendors (for everything else) could distribute apps that use the SE, because such apps need to either be part of the core OS, or be signed with the platform keys, controlled by the respective vendor. Since the only app that made use of the SE was Google Wallet, which ran only on the Nexus S (and initially on a single carrier), this was good enough. However, it made it impossible to develop and distribute an SE app without having it signed by the platform vendor. Android 4.0.4 (API Level 15) changed that by replacing the system-level permission requirement with signing certificate (aka 'signature' in Android framework terms) whitelisting at the OS level. While this still requires modifying core OS files, and thus vendor cooperation, there is no need to sign SE applications with the vendor key, which greatly simplifies distribution. Additionally, since the whitelist is maintained in a file, it can easily be updated using an OTA to add support for more SE applications.

            In practice this is implemented by the NfceeAccessControl class and enforced by the system NfcService. NfceeAccessControl reads the whitelist from /etc/nfcee_access.xml, which is an XML file that stores a list of signing certificates and package names that are allowed to access the SE. Access can be granted both to all apps signed by a particular certificate's private key (if no package is specified), or to a single package (app) only. Here's what the file looks like:

            <?xml version="1.0" encoding="utf-8"?>
            <resources xmlns:xliff="urn:oasis:names:tc:xliff:document:1.2">
                <signer android:signature="30820...90">
                    <package android:name="org.foo.nfc.app" />
                </signer>
            </resources>

            This would allow SE access to the 'org.foo.nfc.app' package, if it is signed by the specified signer. So the first step to getting our app to access the SE is adding its signing certificate and package name to the nfcee_access.xml file. This file resides on the system partition (/etc is symlinked to /system/etc), so we need root access in order to remount it read-write and modify the file. The stock file already has the Google Wallet certificate in it, so it is a good idea to start with that and add our own package, otherwise Google Wallet SE access would be disabled. The 'signature' attribute is a hex encoding of the signing certificate in DER format, which is a pity since that results in an excessively long string (a hash of the certificate would have sufficed). We can either add a <debug/> element to the file, install it, try to access the SE and get the string we need to add from the access denied exception, or simplify the process a bit by preparing the string in advance. We can get the certificate bytes in hex format with a command like this:

            $ keytool -exportcert -v -keystore my.keystore -alias my_signing_key \
            -storepass password|xxd -p -|tr -d '\n'

            This will print the hex string on a single line, so you might want to redirect it to a file for easier copying. Add a new <signer> element to the stock file, add your app's package name and the certificate hex string, and replace the original file in /etc/ (backups are always a good idea). You will also need to reboot the device for the changes to take effect, since the file is only read when the NfcService starts.

            As we said, there are no special permissions required to access the SE in ICS (4.0.3 and above) and Jelly Bean (4.1), so we only need to add the standard NFC permission to our app's manifest. However, the library that implements SE access is marked as optional, and to get it loaded for our app, we need to mark it as required in the manifest with the <uses-library> tag. The AndroidManifest.xml for the app should look something like this:

            <manifest xmlns:android="http://schemas.android.com/apk/res/android"
                package="org.foo.nfc.app"
                android:versionCode="1"
                android:versionName="1.0">

                <uses-sdk
                    android:minSdkVersion="15"
                    android:targetSdkVersion="16" />

                <uses-permission android:name="android.permission.NFC" />

                <application
                    android:icon="@drawable/ic_launcher"
                    android:label="@string/app_name"
                    android:theme="@style/AppTheme">
                    <activity
                        android:name=".MainActivity"
                        android:label="@string/title_activity_main">
                        <intent-filter>
                            <action android:name="android.intent.action.MAIN" />
                            <category android:name="android.intent.category.LAUNCHER" />
                        </intent-filter>
                    </activity>

                    <uses-library
                        android:name="com.android.nfc_extras"
                        android:required="true" />
                </application>
            </manifest>

            With the boilerplate out of the way it is finally time to actually access the SE API. Android doesn't currently implement a standard smart card communication API such as JSR 177 or the Open Mobile API, but instead offers a very basic communication interface in the NfcExecutionEnvironment (NFC-EE) class. It has only three public methods:

            public class NfcExecutionEnvironment {
            public void open() throws IOException {...}

            public void close() throws IOException {...}

            public byte[] transceive(byte[] in) throws IOException {...}
            }

            This simple interface is sufficient to communicate with the SE, so now we just need to get access to an instance. This is available via a static method of the NfcAdapterExtras class which controls both card emulation route (currently only to the SE, since UICC support is not available) and NFC-EE management. So the full code to send a command to the SE becomes:

NfcAdapterExtras adapterExtras = NfcAdapterExtras.get(NfcAdapter.getDefaultAdapter(context));
NfcExecutionEnvironment nfcEe = adapterExtras.getEmbeddedExecutionEnvironment();

nfcEe.open();
byte[] response = nfcEe.transceive(command);
nfcEe.close();

            As we mentioned earlier however, com.android.nfc_extras is an optional package and thus not part of the SDK. We can't import it directly, so we have to either build our app as part of the full Android source (by placing it in /packages/apps/), or resort to reflection. Since the SE interface is quite small, we opt for ease of building and testing, and will use reflection. The code to get, open and use an NFC-EE instance now degenerates to something like this:

Class<?> nfcExtrasClazz = Class.forName("com.android.nfc_extras.NfcAdapterExtras");
Method getMethod = nfcExtrasClazz.getMethod("get", Class.forName("android.nfc.NfcAdapter"));
NfcAdapter adapter = NfcAdapter.getDefaultAdapter(context);
Object nfcExtras = getMethod.invoke(null, adapter);

Method getEEMethod = nfcExtras.getClass().getMethod("getEmbeddedExecutionEnvironment",
        (Class[]) null);
Object ee = getEEMethod.invoke(nfcExtras, (Object[]) null);
Class<?> eeClazz = ee.getClass();
Method openMethod = eeClazz.getMethod("open", (Class[]) null);
Method transceiveMethod = eeClazz.getMethod("transceive",
        new Class[] { byte[].class });
Method closeMethod = eeClazz.getMethod("close", (Class[]) null);

openMethod.invoke(ee, (Object[]) null);
byte[] response = (byte[]) transceiveMethod.invoke(ee, (Object) command);
closeMethod.invoke(ee, (Object[]) null);

We can of course wrap this up in a prettier package, and we will in the second part of the series. What is important to remember is to call close() when done, because wired access to the SE blocks contactless access while the NFC-EE is open. We should now have a working connection to the embedded SE, and sending some bytes should produce an (error) response. Here's a first try:

            D/SEConnection(27318): --> 00000000
            D/SEConnection(27318): <-- 6E00


            We'll explain what the response means and show how to send some actually meaningful commands in the second part of the article.

            Summary

A secure element is a tamper-resistant execution environment on a chip that can execute applications and store data in a secure manner. An SE is found on the UICC of every Android phone, but the platform currently doesn't allow access to it. Recent devices come with NFC support, which is often combined with an embedded secure element chip, usually in the same package. The embedded secure element can be accessed both externally via an NFC reader/writer (virtual mode) and internally via the NfcExecutionEnvironment API (wired mode). Access to the API is currently controlled by a system-level whitelist of signing certificates and package names. Once an application is whitelisted, it can communicate with the SE without any other special permissions or restrictions.

            Revisiting Android disk encryption

In iOS 8, Apple has expanded the scope of data encryption and now mixes in the user's passcode with an unextractable hardware UID when deriving an encryption key, making it harder to extract data from iOS 8 devices. This has been somewhat of a hot topic lately, with opinions ranging from praise for Apple's new focus on serious security to demands for "golden keys" to mobile devices to be magically conjured up. Naturally, the debate has spread to other OSes, and Google has announced that the upcoming Android L release will also have disk encryption enabled by default. Consequently, questions and speculation about the usefulness and strength of Android's disk encryption have sprung up on multiple forums, so this seems like a good time to take another look at its implementation. While Android L hasn't been released yet, some of the improvements to disk encryption it introduces are apparent in the preview release, so this post will briefly introduce them as well.

This post will focus on the security level of disk encryption; for more details on its integration with the platform, see Chapter 10 of my book, 'Android Security Internals' (early access full PDF is available now; print books should ship by the end of October).

            Android 3.0-4.3

            Full disk encryption (FDE) for Android was introduced in version 3.0 (Honeycomb) and didn't change much until version 4.4 (discussed in the next section). Android's FDE uses the dm-crypt target of Linux's device mapper framework to implement transparent disk encryption for the userdata (mounted as /data) partition. Once encryption is enabled, all writes to disk automatically encrypt data before committing it to disk and all reads automatically decrypt data before returning it to the calling process. The disk encryption key (128-bit, called the 'master key') is randomly generated and protected by the lockscreen password. Individual disk sectors are encrypted by the master key using AES in CBC mode, with ESSIV:SHA256 to derive sector IVs.

Android uses a so-called 'crypto footer' structure to store encryption parameters. It is very similar to the encrypted partition header used by LUKS (Linux Unified Key Setup), but is simpler and omits several LUKS features. While LUKS supports multiple key slots, allowing for decryption using multiple passphrases, Android's crypto footer only stores a single copy of the encrypted master key and thus supports a single decryption passphrase. Additionally, while LUKS splits the encrypted key into multiple 'stripes' in order to reduce the probability of recovering the full key after it has been deleted from disk, Android has no such feature. Finally, LUKS includes a master key checksum (derived by running the master key through PBKDF2), which makes it possible to check whether the entered passphrase is correct without decrypting any of the disk data. Android's crypto footer doesn't include a master key checksum, so the only way to check whether the entered passphrase is correct is to try and mount the encrypted partition. If the mount succeeds, the passphrase is considered correct.

            Here's how the crypto footer looks in Android 4.3 (version 1.0).

struct crypt_mnt_ftr {
    __le32 magic;
    __le16 major_version;
    __le16 minor_version;
    __le32 ftr_size;
    __le32 flags;
    __le32 keysize;
    __le32 spare1;
    __le64 fs_size;
    __le32 failed_decrypt_count;
    unsigned char crypto_type_name[MAX_CRYPTO_TYPE_NAME_LEN];
};

The structure includes the version of the FDE scheme, the key size, some flags and the name of the actual disk encryption cipher mode (aes-cbc-essiv:sha256). The crypto footer is immediately followed by the encrypted key and a 16-byte random salt value. In this initial version, a lot of the parameters are implicit and are therefore not included in the crypto footer. The master key is encrypted using a 128-bit AES key (key encryption key, or KEK) derived from a user-supplied passphrase using 2000 iterations of PBKDF2. The derivation process also generates an IV, which is used to encrypt the master key in CBC mode. When an encrypted device is booted, Android takes the passphrase the user has entered, runs it through PBKDF2, decrypts the encrypted master key and passes it to dm-crypt in order to mount the encrypted userdata partition.
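To make this concrete, here is a minimal sketch of the KEK derivation and master key decryption, assuming the salt and the encrypted master key have already been parsed out of the footer (the helper name is ours). The 32 bytes produced by PBKDF2 are split into a 16-byte AES key and a 16-byte IV, mirroring the description above:

import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

static byte[] decryptMasterKey(char[] password, byte[] salt,
        byte[] encryptedMasterKey) throws Exception {
    // 2000 iterations of PBKDF2-HMAC-SHA1, producing 16 key bytes + 16 IV bytes
    SecretKeyFactory skf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
    byte[] ikey = skf.generateSecret(
            new PBEKeySpec(password, salt, 2000, 32 * 8)).getEncoded();
    SecretKeySpec kek = new SecretKeySpec(Arrays.copyOf(ikey, 16), "AES");
    IvParameterSpec iv = new IvParameterSpec(Arrays.copyOfRange(ikey, 16, 32));

    // the 128-bit master key is encrypted with AES in CBC mode, no padding
    Cipher cipher = Cipher.getInstance("AES/CBC/NoPadding");
    cipher.init(Cipher.DECRYPT_MODE, kek, iv);
    return cipher.doFinal(encryptedMasterKey);
}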

            Bruteforcing FDE 1.0

The encryption scheme described in the previous section is considered relatively secure, but because it is implemented entirely in software, its security depends entirely on the complexity of the disk encryption passphrase. If it is sufficiently long and complex, bruteforcing the encrypted master key could take years. However, because Android has chosen to reuse the lockscreen PIN or password (maximum length 16 characters), in practice most people are likely to end up with a relatively short or low-entropy disk encryption password. While the PBKDF2 key derivation algorithm has been designed to work with low-entropy input, and requires considerable computational effort to bruteforce, 2000 iterations are not a significant hurdle even to current off-the-shelf hardware. Let's see how hard it is to bruteforce Android FDE 1.0 in practice.

            Bruteforcing on the device is obviously impractical due to the limited processing resources of Android devices and the built-in rate limiting after several unsuccessful attempts. A much more practical approach is to obtain a copy of the crypto footer and the encrypted userdata partition and try to guess the passphrase offline, using much more powerful hardware. Obtaining a raw copy of a disk partition is usually not possible on most commercial devices, but can be achieved by booting a specialized data acquisition boot image signed by the device manufacturer,  exploiting a flaw in the bootloader that allows unsigned images to be booted (such as this one), or simply by booting a custom recovery image on devices with an unlocked bootloader (a typical first step to 'rooting').

            Once the device has been booted, obtaining a copy of the userdata partition is straightforward. The crypto footer however, despite its name, typically resides on a dedicated partition on recent devices. The name of the partition is specified using the encryptable flag in the device's fstab file. For example, on the Galaxy Nexus, the footer is on the metadata partition as shown below.

            /dev/block/platform/omap/omap_hsmmc.0/by-name/userdata  /data  ext4  \
            noatime,nosuid,nodev,nomblk_io_submit,errors=panic \
            wait,check,encryptable=/dev/block/platform/omap/omap_hsmmc.0/by-name/metadata

            Once we know the name of the partition that stores the crypto footer it can be copied simply by using the dd command.

Very short passcodes (for example a 4-digit PIN) can be successfully bruteforced using a script (this particular one is included in Santoku Linux) that runs on a desktop CPU. However, much better performance can be achieved on a GPU, which has been specifically designed to execute multiple tasks in parallel. PBKDF2 is an iterative algorithm based on SHA-1 (SHA-2 can also be used) that requires very little memory for execution and lends itself to parallelization. One GPU-based, high-performance PBKDF2 implementation is found in the popular password recovery tool hashcat. Version 1.30 comes with a built-in Android FDE module, so recovering an Android disk encryption key is as simple as parsing the crypto footer and feeding the encrypted key, salt, and the first several sectors of the encrypted partition to hashcat. As we noted in the previous section, the crypto footer does not include any checksum of the master key, so the only way to check whether the decrypted master key is the correct one is to try to decrypt the disk partition and look for some known data. Because most current Android devices use the ext4 filesystem, hashcat (and other similar tools) look for patterns in the ext4 superblock in order to confirm whether the tried passphrase is correct.

            The Android FDE input for hashcat includes the salt, encrypted master key and the first 3 sectors of the encrypted partition (which contain a copy of the 1024-byte ext4 superblock). The hashcat input file might look like this (taken from the hashcat example hash):

            $fde$16$ca56e82e7b5a9c2fc1e3b5a7d671c2f9$16$7c124af19ac913be0fc137b75a34b20d$eac806ae7277c8d4...

            On a device that uses a six-digit lockscreen PIN, the PIN, and consequently the FDE master key can be recovered with the following command:

            $ cudaHashcat64 -m 8800 -a 3 android43fde.txt ?d?d?d?d?d?d
            ...
            Session.Name...: cudaHashcat
            Status.........: Cracked
            Input.Mode.....: Mask (?d?d?d?d?d?d) [6]
            Hash.Target....: $fde$16$aca5f840...
            Hash.Type......: Android FDE
            Time.Started...: Sun Oct 05 19:06:23 2014 (6 secs)
            Speed.GPU.#1...: 20629 H/s
            Recovered......: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts
            Progress.......: 122880/1000000 (12.29%)
            Skipped........: 0/122880 (0.00%)
            Rejected.......: 0/122880 (0.00%)
            HWMon.GPU.#1...: 0% Util, 48c Temp, N/A Fan

            Started: Sun Oct 05 19:06:23 2014
            Stopped: Sun Oct 05 19:06:33 2014

Even when run on the GPU of a mobile computer (NVIDIA GeForce 730M), hashcat can achieve more than 20,000 PBKDF2 hashes per second, and recovering a 6-digit PIN takes less than 10 seconds. On the same hardware, a 6-letter (lowercase only) password takes about 4 hours.

            As you can see, bruteforcing a simple PIN or password is very much feasible, so choosing a strong lockscreen password is vital. Lockscreen password strength can be enforced by installing a device administrator that sets password complexity requirements. Alternatively, a dedicated disk encryption password can be set on rooted devices using the shell or a dedicated application. CyanogenMod 11 supports setting a dedicated disk encryption password out of the box, and one can be set via system Settings, as shown below.

            Android 4.4

Android 4.4 adds several improvements to disk encryption, but the most important one is replacing the PBKDF2 key derivation function (KDF) with scrypt. scrypt has been specifically designed to be hard to crack on GPUs by requiring a large (and configurable) amount of memory. Because GPUs have a limited amount of memory, executing multiple scrypt tasks in parallel is no longer feasible, and thus cracking scrypt is much slower than PBKDF2 (or similar hash-based KDFs). As part of the upgrade process to 4.4, Android automatically updates the crypto footer to use scrypt and re-encrypts the master key. Thus every device running Android 4.4 (devices using a vendor-proprietary FDE scheme excluded) should have its FDE master key protected using a scrypt-derived key.

            The Android 4.4 crypto footer looks like this (version 1.2):

struct crypt_mnt_ftr {
    __le32 magic;
    __le16 major_version;
    __le16 minor_version;
    __le32 ftr_size;
    __le32 flags;
    __le32 keysize;
    __le32 spare1;
    __le64 fs_size;
    __le32 failed_decrypt_count;
    unsigned char crypto_type_name[MAX_CRYPTO_TYPE_NAME_LEN];
    __le32 spare2;
    unsigned char master_key[MAX_KEY_LEN];
    unsigned char salt[SALT_LEN];
    __le64 persist_data_offset[2];
    __le32 persist_data_size;
    __le8 kdf_type;
    /* scrypt parameters. See www.tarsnap.com/scrypt/scrypt.pdf */
    __le8 N_factor; /* (1 << N) */
    __le8 r_factor; /* (1 << r) */
    __le8 p_factor; /* (1 << p) */
};

            As you can see, the footer now includes an explicit kdf_type which specifies the KDF used to derive the master key KEK. The values of the scrypt initialization parameters (N, r and p) are also included. The master key size (128-bit) and disk sector encryption mode (aes-cbc-essiv:sha256) are the same as in 4.3.
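For illustration, swapping scrypt into the earlier key derivation sketch might look like the following. Standard Java has no scrypt implementation, so this uses Bouncy Castle's SCrypt class (an implementation choice of ours; any scrypt library will do), with the N, r and p values taken straight from the footer (variable names are ours):

// derive 32 bytes (16-byte KEK + 16-byte IV) using the footer's scrypt
// parameters, e.g. N_factor=15, r_factor=3, p_factor=1 -> N=32768, r=8, p=2
byte[] ikey = org.bouncycastle.crypto.generators.SCrypt.generate(
        passwordBytes, salt, 1 << nFactor, 1 << rFactor, 1 << pFactor, 32);
// split into KEK and IV and decrypt the master key exactly as before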

            Bruteforcing the master key now requires parsing the crypto footer, initializing scrypt and generating all target PIN or password combinations. As the 1.2 crypto footer still does not include a master key checksum, checking whether the tried PIN or password is correct again requires looking for known plaintext in the ext4 superblock.
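The known-plaintext check itself is straightforward once the disk layout is known: the ext4 superblock starts at byte offset 1024 of the partition (512-byte sector 2) and carries the magic number 0xEF53 at offset 0x38. Here is a rough sketch of such a check, including the ESSIV IV computation; the helper name is ours, and real tools check more superblock fields than just the magic:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// returns true if the candidate master key decrypts the ext4 superblock
// (sector2 holds the 512 bytes at partition offset 1024)
static boolean isValidMasterKey(byte[] masterKey, byte[] sector2) throws Exception {
    // ESSIV:SHA256 -- the IV for sector n is AES-ECB_k(n), where
    // k = SHA-256(masterKey) and n is encoded as 16 bytes, little-endian
    byte[] essivKey = MessageDigest.getInstance("SHA-256").digest(masterKey);
    byte[] sectorNum = ByteBuffer.allocate(16)
            .order(ByteOrder.LITTLE_ENDIAN).putLong(2).array();
    Cipher essiv = Cipher.getInstance("AES/ECB/NoPadding");
    essiv.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(essivKey, "AES"));
    byte[] iv = essiv.doFinal(sectorNum);

    Cipher cbc = Cipher.getInstance("AES/CBC/NoPadding");
    cbc.init(Cipher.DECRYPT_MODE, new SecretKeySpec(masterKey, "AES"),
            new IvParameterSpec(iv));
    byte[] plain = cbc.doFinal(sector2);
    // the ext4 magic 0xEF53 lives at superblock offset 0x38, little-endian
    return (plain[0x38] & 0xff) == 0x53 && (plain[0x39] & 0xff) == 0xEF;
}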

While hashcat has supported scrypt since version 1.30, a GPU implementation is not much more efficient (and can in fact be slower) than running scrypt on a CPU. Additionally, the Android 4.4 crypto footer format is not supported, so hashcat cannot be used to recover Android 4.4 disk encryption passphrases as is.

            Instead, the Santoku Linux FDE bruteforcer Python script can be extended to support the 1.2 crypto footer format and the scrypt KDF. A sample (and not particularly efficient) implementation can be found here. It might produce the following output when run on a 3.50GHz Intel Core i7 CPU:

            $ time python bruteforce_stdcrypto.py header footer 4

            Android FDE crypto footer
            -------------------------
            Magic : 0xD0B5B1C4
            Major Version : 1
            Minor Version : 2
            Footer Size : 192 bytes
            Flags : 0x00000000
            Key Size : 128 bits
            Failed Decrypts: 0
            Crypto Type : aes-cbc-essiv:sha256
            Encrypted Key : 0x66C446E04854202F9F43D69878929C4A
            Salt : 0x3AB4FA74A1D6E87FAFFB74D4BC2D4013
            KDF : scrypt
            N_factor : 15 (N=32768)
            r_factor : 3 (r=8)
            p_factor : 1 (p=2)
            -------------------------
            Trying to Bruteforce Password... please wait
            Trying: 0000
            Trying: 0001
            Trying: 0002
            Trying: 0003
            ...
            Trying: 1230
            Trying: 1231
            Trying: 1232
            Trying: 1233
            Trying: 1234
            Found PIN!: 1234

            real 4m43.985s
            user 4m34.156s
            sys 0m9.759s

As you can see, trying 1200 PIN combinations requires almost 5 minutes, so recovering a simple PIN is no longer instantaneous. That said, cracking a short PIN or password is still very much feasible, so choosing a strong lockscreen password (or a dedicated disk encryption password, when possible) is still very important.

            Android L

A preview release of the upcoming Android version (referred to as 'L') has been available for several months now, so we can observe some of the expected changes to disk encryption. If we run the crypto footer obtained from an encrypted Android L device through the script introduced in the previous section, we may get the following output:

            $ ./bruteforce_stdcrypto.py header L_footer 4

            Android FDE crypto footer
            -------------------------
            Magic : 0xD0B5B1C4
            Major Version : 1
            Minor Version : 3
            Footer Size : 2288 bytes
            Flags : 0x00000000
            Key Size : 128 bits
            Failed Decrypts: 0
            Crypto Type : aes-cbc-essiv:sha256
            Encrypted Key : 0x825F3F10675C6F8B7A6F425599D9ECD7
            Salt : 0x0B9C7E8EA34417ED7425C3A3CFD2E928
            KDF : unknown (3)
            N_factor : 15 (N=32768)
            r_factor : 3 (r=8)
            p_factor : 1 (p=2)
            -------------------------
            ...

            As you can see above, the crypto footer version has been upped to 1.3, but the disk encryption cipher mode and key size have not changed. However, version 1.3 uses a new, unknown KDF specified with the constant 3 (1 is PBKDF2, 2 is scrypt). Additionally, encrypting a device no longer requires setting a lockscreen PIN or password, which suggests that the master key KEK is no longer directly derived from the lockscreen password. Starting the encryption process produces the following logcat output:

            D/QSEECOMAPI: (  178): QSEECom_start_app sb_length = 0x2000
            D/QSEECOMAPI: ( 178): App is already loaded QSEE and app id = 1
            D/QSEECOMAPI: ( 178): QSEECom_shutdown_app
            D/QSEECOMAPI: ( 178): QSEECom_shutdown_app, app_id = 1
            ...
            I/Cryptfs ( 178): Using scrypt with keymaster for cryptfs KDF
            D/QSEECOMAPI: ( 178): QSEECom_start_app sb_length = 0x2000
            D/QSEECOMAPI: ( 178): App is already loaded QSEE and app id = 1
            D/QSEECOMAPI: ( 178): QSEECom_shutdown_app
            D/QSEECOMAPI: ( 178): QSEECom_shutdown_app, app_id = 1

As discussed in a previous post, 'QSEE' stands for Qualcomm Secure Execution Environment, which is an ARM TrustZone-based implementation of a TEE. QSEE provides the hardware-backed credential store on most devices that use recent Qualcomm SoCs. From the log above, it appears that Android's keymaster HAL module has been extended to protect the disk encryption KEK using hardware-backed storage (cf. 'Using scrypt with keymaster for cryptfs KDF' in the log above). The log also mentions scrypt, so it is possible that the lockscreen password (if present), along with some key (or seed) stored in the TEE, is fed to the KDF to produce the final master key KEK. However, since no source code is currently available, we cannot confirm this. That said, setting an unlock pattern on an encrypted Android L device produces the following output, which suggests that the pattern is indeed used when generating the encryption key:

            D/VoldCmdListener(  173): cryptfs changepw pattern {}
            D/QSEECOMAPI: ( 173): QSEECom_start_app sb_length = 0x2000
            D/QSEECOMAPI: ( 173): App is already loaded QSEE and app id = 1
            ...
            D/QSEECOMAPI: ( 173): QSEECom_shutdown_app
            D/QSEECOMAPI: ( 173): QSEECom_shutdown_app, app_id = 1
            I/Cryptfs ( 173): Using scrypt with keymaster for cryptfs KDF
            D/QSEECOMAPI: ( 173): QSEECom_start_app sb_length = 0x2000
            D/QSEECOMAPI: ( 173): App is already loaded QSEE and app id = 1
            D/QSEECOMAPI: ( 173): QSEECom_shutdown_app
            D/QSEECOMAPI: ( 173): QSEECom_shutdown_app, app_id = 1
            E/VoldConnector( 756): NDC Command {5 cryptfs changepw pattern [scrubbed]} took too long (6210ms)

As you can see in the listing above, the cryptfs changepw command, which is used to send instructions to Android's vold daemon, has been extended to support a pattern, in addition to the previously supported PIN/password. Additionally, the amount of time the password change takes (6 seconds) suggests that the KDF (scrypt) is indeed being executed to generate a new encryption key. Once we've set a lockscreen unlock pattern, booting the device now requires entering the pattern, as can be seen in the screenshot below. Another subtle change introduced in Android L is that when booting an encrypted device, the lockscreen pattern, PIN or password needs to be entered only once (at boot time), and not twice (once more on the lockscreen, after Android boots) as in previous versions.


While no definitive details are available, it is fairly certain that (at least on high-end devices) Android's disk encryption key(s) will have some hardware protection in Android L. Assuming that the implementation is similar to that of the hardware-backed credential store, disk encryption keys should be encrypted by an unextractable key encryption key stored in the SoC, so obtaining a copy of the crypto footer and the encrypted userdata partition and bruteforcing the lockscreen passphrase should no longer be sufficient to decrypt disk contents. Disk encryption in the Android L preview (at least on a Nexus 7 2013) feels significantly faster (encrypting the 16GB data partition takes about 10 minutes), so it is most probably hardware-accelerated as well (or the initial encryption only encrypts disk blocks that are actually in use, and not every single block as in previous versions). However, it remains to be seen whether high-end Android L devices will include a dedicated crypto co-processor akin to Apple's 'Secure Enclave'. While the current TrustZone-based key protection is much better than the software-only implementation found in previous versions, a flaw in the secure TEE OS or any of the trusted TEE applications could lead to extracting hardware-protected keys or otherwise compromising the integrity of the system.

            Update 2014/11/4: The official documentation about disk encryption has been updated, including details about KEK protection. Quote:
            The encrypted key is stored in the crypto metadata. Hardware backing is implemented by using Trusted Execution Environment’s (TEE) signing capability. Previously, we encrypted the master key with a key generated by applying scrypt to the user's password and the stored salt. In order to make the key resilient against off-box attacks, we extend this algorithm by signing the resultant key with a stored TEE key. The resultant signature is then turned into an appropriate length key by one more application of scrypt. This key is then used to encrypt and decrypt the master key. To store this key:
            1. Generate random 16-byte disk encryption key (DEK) and 16-byte salt.
            2. Apply scrypt to the user password and the salt to produce 32-byte intermediate key 1 (IK1).
            3. Pad IK1 with zero bytes to the size of the hardware-bound private key (HBK). Specifically, we pad as: 00 || IK1 || 00..00; one zero byte, 32 IK1 bytes, 223 zero bytes.
            4. Sign padded IK1 with HBK to produce 256-byte IK2.
            5. Apply scrypt to IK2 and salt (same salt as step 2) to produce 32-byte IK3.
            6. Use the first 16 bytes of IK3 as KEK and the last 16 bytes as IV.
            7. Encrypt DEK with AES_CBC, with key KEK, and initialization vector IV.
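Expressed in code, the documented derivation might look roughly like the sketch below. The HBK signing happens inside the TEE; here it is modeled as a raw ('NoPadding') RSA private-key operation on the padded IK1, scrypt again comes from Bouncy Castle, and the scrypt parameters and all names are our assumptions (the documentation does not specify them):

import java.security.PrivateKey;
import java.util.Arrays;
import javax.crypto.Cipher;
import org.bouncycastle.crypto.generators.SCrypt;

// hbk stands in for the TEE's hardware-bound key; scrypt parameters assumed
static byte[][] deriveKekAndIv(byte[] password, byte[] salt, PrivateKey hbk)
        throws Exception {
    // step 2: IK1 = scrypt(password, salt), 32 bytes
    byte[] ik1 = SCrypt.generate(password, salt, 32768, 8, 2, 32);

    // step 3: pad to the RSA modulus size: 00 || IK1 || 00..00 (256 bytes)
    byte[] padded = new byte[256];
    System.arraycopy(ik1, 0, padded, 1, ik1.length);

    // step 4: 'sign' with the hardware-bound key (raw RSA private-key operation)
    Cipher rsa = Cipher.getInstance("RSA/ECB/NoPadding");
    rsa.init(Cipher.ENCRYPT_MODE, hbk);
    byte[] ik2 = rsa.doFinal(padded);

    // steps 5 and 6: IK3 = scrypt(IK2, salt); first half is the KEK, second the IV
    byte[] ik3 = SCrypt.generate(ik2, salt, 32768, 8, 2, 32);
    return new byte[][] { Arrays.copyOf(ik3, 16), Arrays.copyOfRange(ik3, 16, 32) };
}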

                  Summary

Android has included full disk encryption (FDE) support since version 3.0, but versions prior to 4.4 used a fairly easy-to-bruteforce key derivation function (PBKDF2 with 2000 iterations). Additionally, because the disk encryption password is the same as the lockscreen one, most users tend to use simple PINs or passwords (unless a device administrator enforces password complexity rules), which further facilitates bruteforcing. Android 4.4 replaced the disk encryption KDF with scrypt, which is much harder to crack and cannot be implemented efficiently on off-the-shelf GPU hardware. In addition to enabling FDE out of the box, Android L is expected to include hardware protection for disk encryption keys, as well as hardware acceleration for encrypted disk access. These two features should make FDE on Android both more secure and much faster.

[Note that the discussion in this post is based on "stock Android" as released by Google (referenced source code is from AOSP). Some device vendors implement slightly different encryption schemes, and hardware-backed key storage and/or hardware acceleration are already available via vendor extensions on some high-end devices.]

                  Android Security Internals is out

Some six months after the first early access chapters were announced, my book has now officially been released. While the final ebook PDF has been available for a few weeks, you can now get all ebook formats (PDF, Mobi and ePub) directly from the publisher, No Starch Press. Print books are also ready and should start shipping tomorrow (Oct 24th). You can use the code UNDERTHEHOOD when checking out for a 30% discount in the next few days. The book will also be available from O'Reilly, Amazon and other retailers in the coming weeks.

This book would not have been possible without the efforts of Bill Pollock and Alison Law from No Starch, who edited, refined and produced my raw writings. Kenny Root reviewed all chapters and caught some embarrassing mistakes; all that are left are mine alone. Jorrit “Chainfire” Jongma reviewed my coverage of SuperSU and Jon “jcase” Sawyer contributed the foreword. Once again, a big thanks to everyone involved!

                  About the book

                  The book's purpose and structure have not changed considerably since it was first announced. It walks you through Android's security architecture, starting from the bottom up. It starts with fundamental concepts such as Binder, permissions and code signing, and goes on to describe more specific topics such as cryptographic providers, account management and device administration. The book includes excerpts from core native daemons and platform services, as well as some application-level code samples, so some familiarity with Linux and Android programming is assumed (but not absolutely required). 

                  Android versions covered

                  The book covers Android 4.4, based on the source code publicly released through AOSP. Android's master branch is also referenced a few times, because master changes are usually a good indicator of the direction future releases will take. Vendor modifications or extensions to Android, as well as  device-specific features are not discussed.

The first developer preview of Android 5.0 (Lollipop, then known only as 'Android L') was announced shortly after the first draft of this book was finished. This first preview release included some new security features, such as improvements to full-disk encryption and device administration, but not all planned features were available (for example, Smart Lock was missing). The final Lollipop developer preview (released last week) added those missing features and finalized the public API. The source code for Lollipop is, however, not yet available, and trying to write an 'internals' book without it would either result in incomplete or speculative coverage, or would turn into a (rather tough) exercise in reverse engineering. That is why I've chosen not to cover Android 5.0 in the book at all; it is exclusively focused on Android 4.4 (KitKat).

Lollipop is a major release, and as such would require reworking most of the chapters and, of course, adding a lot of new content. This could happen in an updated version of the book at some point. Not to worry though: some of the more interesting new security features will probably get covered right here, on the blog, first.

                  With that out of the way, here is the extended table of contents. You can find the full table of contents on the book's official page.

                  Update: Chapter 1 is now also freely available on No Starch's site.

                  Table of contents

                   Chapter 1: Android’s Security Model
                  • Android’s Architecture
                  • Android’s Security Model
                  Chapter 2: Permissions
                  • The Nature of Permissions
                  • Requesting Permissions
                  • Permission Management
                  • Permission Protection Levels
                  • Permission Assignment
                  • Permission Enforcement
                  • System Permissions
                  • Shared User ID
                  • Custom Permissions
                  • Public and Private Components
                  • Activity and Service Permissions
                  • Broadcast Permissions
                  • Content Provider Permissions
                  • Pending Intents
                  Chapter 3: Package Management
                  • Android Application Package Format
                  • Code signing
                  • APK Install Process
                  • Package Verification
                  Chapter 4: User Management
                  • Multi-User Support Overview
                  • Types of Users
                  • User Management
                  • User Metadata
                  • Per-User Application Management
                  • External Storage
                  • Other Multi-User Features
                  Chapter 5: Cryptographic Providers
                  • JCA Provider Architecture
                  • JCA Engine Classes
                  • Android JCA Providers
                  • Using a Custom Provider
                  Chapter 6: Network Security and PKI
                  • PKI and SSL Overview
                  • JSSE Introduction
                  • Android JSSE Implementation
                  Chapter 7: Credential Storage
                  • VPN and Wi-Fi EAP Credentials
                  • Credential Storage Implementation
                  • Public APIs
                  Chapter 8: Online Account Management
                  • Android Account Management Overview
                  • Account Management Implementation
                  • Google Accounts Support
                  Chapter 9: Enterprise Security
                  • Device Administration
                  • VPN Support
                  • Wi-Fi EAP
                  Chapter 10: Device Security
                  • Controlling OS Boot-Up and Installation
                  • Verified Boot
                  • Disk Encryption
                  • Screen Security
                  • Secure USB Debugging
                  • Android Backup
                  Chapter 11: NFC and Secure Elements
                  • NFC Overview
                  • Android NFC Support
                  • Secure Elements
                  • Software Card Emulation
Chapter 12: SELinux
                  • SELinux Introduction
                  • Android Implementation
                  • Android 4.4 SELinux Policy
                  Chapter 13: System Updates and Root Access
                  • Bootloader
                  • Recovery
                  • Root Access
                  • Root Access on Production Builds

                  Dissecting Lollipop's Smart Lock

Android 5.0 (Lollipop) has been out for a while now, and most of its new features have been introduced, benchmarked, or complained about extensively. The new release also includes a number of security enhancements, of which disk encryption has gotten probably the most media attention. Smart Lock (originally announced at Google I/O 2014), which allows bypassing the device lockscreen when certain environmental conditions are met, is probably the most user-visible new security feature. As such, it has also been discussed and blogged about extensively. However, because Smart Lock is a proprietary feature incorporated in Google Play Services, not many details about its implementation or security level are available. This post will look into the Android framework extensions that Smart Lock is built upon, show how to use them to create your own unlock method, and finally briefly discuss its Play Services implementation.

                  Trust agents

Smart Lock is built upon a new Lollipop feature called trust agents. To quote from the framework documentation, a trust agent is a 'service that notifies the system about whether it believes the environment of the device to be trusted.' The exact meaning of 'trusted' is up to the trust agent to define. When a trust agent believes it can trust the current environment, it notifies the system via a callback, and the system decides how to relax the security configuration of the device. In the current Android incarnation, being in a trusted environment grants the user the ability to bypass the lockscreen.

                  Trust is granted per user, so each user's trust agents can be configured differently. Additionally, trust can be granted for a certain period of time, and the system automatically reverts to an untrusted state when that period expires. Device administrators can set the maximum trust period trust agents are allowed to set, or disable trust agents altogether. 

                  Trust agent API

                  Trust agents are Android services which extend the TrustAgentService base class (not available in the public SDK). The base class provides methods for enabling the trust agent (setManagingTrust()), granting and revoking trust (grant/revokeTrust()), as well as a number of callback methods, as shown below.

public class TrustAgentService extends Service {

    public void onUnlockAttempt(boolean successful) {
    }

    public void onTrustTimeout() {
    }

    private void onError(String msg) {
        Slog.v(TAG, "Remote exception while " + msg);
    }

    public boolean onSetTrustAgentFeaturesEnabled(Bundle options) {
        return false;
    }

    public final void grantTrust(
            final CharSequence message,
            final long durationMs, final boolean initiatedByUser) {
        //...
    }

    public final void revokeTrust() {
        //...
    }

    public final void setManagingTrust(boolean managingTrust) {
        //...
    }

    @Override
    public final IBinder onBind(Intent intent) {
        return new TrustAgentServiceWrapper();
    }

    //...
}
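A concrete agent only needs to subclass this and call setManagingTrust() once it is ready to manage trust. Below is a minimal sketch (using the GhettoTrustAgent name of the sample app referenced later in this post) that grants trust for five minutes whenever it receives a local broadcast; the action name is ours and intent extras are omitted for brevity:

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.service.trust.TrustAgentService;
import android.support.v4.content.LocalBroadcastManager;

public class GhettoTrustAgent extends TrustAgentService {

    static final String ACTION_GRANT_TRUST = "org.nick.ghettounlock.GRANT_TRUST";

    private final BroadcastReceiver receiver = new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            // another component (e.g. a WiFi state receiver) decided
            // the current environment is trusted
            grantTrust("GhettoTrustAgent", 5 * 60 * 1000, false);
        }
    };

    @Override
    public void onCreate() {
        super.onCreate();
        // tell the system we are initialized and ready to manage trust
        setManagingTrust(true);
        LocalBroadcastManager.getInstance(this).registerReceiver(receiver,
                new IntentFilter(ACTION_GRANT_TRUST));
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        LocalBroadcastManager.getInstance(this).unregisterReceiver(receiver);
    }
}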

                  To be picked up by the system, a trust agent needs to be declared in AndroidManifest.xml with an intent filter for the android.service.trust.TrustAgentService action and require the BIND_TRUST_AGENT permission, as shown below. This ensures that only the system can bind to the trust agent, as the BIND_TRUST_AGENT permission requires the platform signature. A Binder API, which allows calling the agent from other processes, is provided by the TrustAgentService base class. 

<manifest ... >

    <uses-permission android:name="android.permission.CONTROL_KEYGUARD" />
    <uses-permission android:name="android.permission.PROVIDE_TRUST_AGENT" />

    <application ...>
        <service android:exported="true"
            android:label="@string/app_name"
            android:name=".GhettoTrustAgent"
            android:permission="android.permission.BIND_TRUST_AGENT">
            <intent-filter>
                <action android:name="android.service.trust.TrustAgentService"/>
                <category android:name="android.intent.category.DEFAULT"/>
            </intent-filter>

            <meta-data android:name="android.service.trust.trustagent"
                android:resource="@xml/ghetto_trust_agent"/>
        </service>
        ...
    </application>
</manifest>

                  The system Settings app scans app packages that match the intent filter shown above, checks if they hold the PROVIDE_TRUST_AGENT signature permission (defined in the android package) and shows them in the Trust agents screen (Settings->Security->Trust agents) if all required conditions are met. Currently only a single trust agent is supported, so only the first matched package is shown. Additionally, if the manifest declaration contains a <meta-data> tag that points to an XML resource that defines a settings activity (see below for an example), a menu entry that opens the settings activity is injected into the Security settings screen.

                  <trust-agent xmlns:android="http://schemas.android.com/apk/res/android"
                  android:title="Ghetto Unlock"
                  android:summary="A bunch of unlock triggers"
                  android:settingsActivity=".GhettoTrustAgentSettings" />


Here's what the Trust agents screen looks like when a system app that declares a trust agent is installed.

Trust agents are inactive by default, and are activated when the user toggles the switch in the screen above. Active agents are ultimately managed by the system TrustManagerService, which also keeps a log of trust-related events. You can get the current trust state and dump the event log using the dumpsys command, as shown below.

                  $ adb shell dumpsys trust
                  Trust manager state:
                  User "Owner" (id=0, flags=0x13) (current): trusted=0, trustManaged=1
                  Enabled agents:
                  org.nick.ghettounlock/.GhettoTrustAgent
                  bound=1, connected=1, managingTrust=1, trusted=0
                  Events:
                  #0 12-24 10:42:01.915 TrustTimeout: agent=GhettoTrustAgent
                  #1 12-24 10:42:01.915 TrustTimeout: agent=GhettoTrustAgent
                  #2 12-24 10:42:01.915 TrustTimeout: agent=GhettoTrustAgent
                  ...

                  Granting trust

Once a trust agent is installed, a trust grant can be triggered by any observable environment event, or directly by the user (for example, via an authentication challenge). An often requested, but not particularly secure (unless using a WPA2 profile that authenticates WiFi access points), unlock trigger is connecting to a 'home' WiFi AP. This feature can be easily implemented using a broadcast receiver that reacts to android.net.wifi.STATE_CHANGE (see the sample app; based on the sample in AOSP). Once a 'trusted' SSID is detected, the receiver only needs to call the grantTrust() method of the trust agent service. This can be achieved in a number of ways, but if both the service and the receiver are in the same package, a straightforward way is to use a LocalBroadcastManager (part of the support library) to send a local broadcast, as shown below.

static void sendGrantTrust(Context context,
        String message,
        long durationMs,
        boolean initiatedByUser) {
    Intent intent = new Intent(ACTION_GRANT_TRUST);
    intent.putExtra(EXTRA_MESSAGE, message);
    intent.putExtra(EXTRA_DURATION, durationMs);
    intent.putExtra(EXTRA_INITIATED_BY_USER, initiatedByUser);
    LocalBroadcastManager.getInstance(context).sendBroadcast(intent);
}

// in the receiver
@Override
public void onReceive(Context context, Intent intent) {
    if (WifiManager.NETWORK_STATE_CHANGED_ACTION.equals(intent.getAction())) {
        WifiInfo wifiInfo = (WifiInfo) intent
                .getParcelableExtra(WifiManager.EXTRA_WIFI_INFO);
        // ...
        if (secureSsid.equals(wifiInfo.getSSID())) {
            GhettoTrustAgent.sendGrantTrust(context, "GhettoTrustAgent::WiFi",
                    TRUST_DURATION_5MINS, false);
        }
    }
}


                  This will call the TrustAgentServiceCallback installed by the system lockscreen and effectively set a per-user trusted flag. If the flag is true, the lockscreen implementation allows the keyguard to be dismissed without authentication. Once the trust timeout expires, the user must enter their pattern, PIN or password in order to dismiss the keyguard. The current trust state is displayed at the bottom of the keyguard as a padlock icon: when unlocked, the current environment is trusted; when locked, explicit authentication is required. The user can also manually lock the device by pressing the padlock, even if an active trust agent currently has trust.

                  NFC unlock

As discussed in a previous post, implementing NFC unlock in previous Android versions was possible, but required some modifications to the system NfcService, because the NFC controller was not polled while the lockscreen was displayed. In order to make implementing NFC unlock possible, Lollipop introduces several hooks into the NfcService, which allow NFC polling on the lockscreen. If a matching tag is discovered, a reference to a live Tag object is passed to interested parties. Let's look into how this is implemented in a bit more detail.

The NfcAdapter class has a couple of new (hidden) methods that allow adding and removing an NFC unlock handler (addNfcUnlockHandler() and removeNfcUnlockHandler(), respectively). An NFC unlock handler is an implementation of the NfcUnlockHandler interface shown below.

                  interface NfcUnlockHandler {
                  public boolean onUnlockAttempted(Tag tag);
                  }

                  When registering an unlock handler you must specify not only the NfcUnlockHandler object, but also a list of NFC technologies that should be polled for at the lockscreen. Calling the addNfcUnlockHandler() method requires the WRITE_SECURE_SETTINGS signature permission.

                  Multiple unlock handlers can be registered and are tried in turn until one of them returns true from onUnlockAttempted(). This terminates the NFC unlock sequence, but doesn't actually dismiss the keyguard. In order to unlock the device, an NFC unlock handler should work with a trust agent in order to grant trust. Judging from NFCService's commit log, this appears to be a fairly recent development: initially, the Settings app included functionality to register trusted tags, which would automatically unlock the device (based on the tag's UID), but this functionality was removed in favour of trust agents.

                  Unlock handlers can authenticate the scanned NFC tag in a variety of ways, depending on the tag's technology. For passive tags that contain fixed data, authentication typically relies either on the tag's unique ID, or on some shared secret written to the tag. For active tags that can execute code, it can be anything from an OTP to full-blown multi-step mutual authentication. However, because NFC communication is not very fast, and most tags have limited processing power, a simple protocol with few roundtrips is preferable. A simple implementation that requires the tag to sign a random value with its RSA private key, and then verifies the signature using the corresponding public key is included in the sample application. For signature verification to work, the trust agent needs to be initialized with the tag's public key, which in this case is imported via the trust agent's settings activity shown below.
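Putting the pieces together, a sketch of an unlock handler that implements this kind of challenge-response check might look like the following. Since the APIs are hidden, this needs to be built against the full framework source; the exact addNfcUnlockHandler() parameters, the APDU format behind createSignCommand(), and the tagPublicKey and TRUST_DURATION_5MINS names are all our assumptions, and status word handling is omitted:

NfcAdapter adapter = NfcAdapter.getDefaultAdapter(context);
adapter.addNfcUnlockHandler(new NfcUnlockHandler() {
    @Override
    public boolean onUnlockAttempted(Tag tag) {
        try {
            IsoDep isoDep = IsoDep.get(tag);
            isoDep.connect();
            // ask the tag to sign a random challenge with its RSA private key
            byte[] challenge = new byte[16];
            new SecureRandom().nextBytes(challenge);
            byte[] signature = isoDep.transceive(createSignCommand(challenge));
            isoDep.close();

            // verify the signature with the tag's registered public key
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(tagPublicKey);
            verifier.update(challenge);
            if (verifier.verify(signature)) {
                GhettoTrustAgent.sendGrantTrust(context, "GhettoTrustAgent::NFC",
                        TRUST_DURATION_5MINS, true);
                return true;
            }
        } catch (Exception e) {
            // ignore and fall through: let other handlers have a go
        }
        return false;
    }
}, new String[] { IsoDep.class.getName() });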

                  Smart Lock

                  'Smart Lock' is just the marketing name for the GoogleTrustAgent which is included in Google Play Services (com.google.android.gms package), as can be seen from the dumpsys output below.

                  $ adb shell dumpsys trust
                  Trust manager state:
                  User "Owner" (id=0, flags=0x13) (current): trusted=1, trustManaged=1
                  Enabled agents:
                  com.google.android.gms/.auth.trustagent.GoogleTrustAgent
                  bound=1, connected=1, managingTrust=1, trusted=1
                  message=""



This trust agent offers several trust triggers: trusted devices, trusted places and a trusted face. Trusted face is just a rebranding of the face unlock method found in previous versions. It uses the same proprietary image recognition technology, but is significantly more usable because, when enabled, the keyguard continuously scans for a matching face instead of requiring you to stay still while it takes and processes your picture. The security level provided also remains the same -- fairly low, as the trusted face setup screen warns. Trusted places is based on the geofencing technology which has been available in Google Play Services for a while. Trusted places uses the 'Home' and 'Work' locations associated with your Google account to make setup easier, and also allows for registering a custom place based on the current location or any coordinates selectable via Google Maps. As a helpful popup warns, accuracy cannot be guaranteed, and the trusted place range can be up to 100 meters. In practice, the device can remain unlocked for a while even when this distance is exceeded.

Trusted devices supports two different types of devices at the time of this writing: Bluetooth and NFC. The Bluetooth option allows the Android device to remain unlocked while a paired Bluetooth device is in range. This feature relies on Bluetooth's built-in security mechanism, and as such its security depends on the paired device. Newer devices, such as Android Wear watches or the Pebble watch, support Secure Simple Pairing (Security Mode 4), which uses Elliptic Curve Diffie-Hellman (ECDH) in order to generate a shared link key. During the pairing process, these devices display a 6-digit number based on a hash of both devices' public keys in order to provide device authentication and protect against MiTM attacks (a feature called numeric comparison). However, older wearables (such as the Meta Watch), Bluetooth earphones, and others are also supported. These previous-generation devices only support Standard Pairing, which generates authentication keys based on the device's physical address and a 4-digit PIN, which is usually fixed and set to a well-known value such as '0000' or '1234'. Such devices can be easily impersonated.

                  Google's Smart Lock implementation requires a persistent connection to a trusted device, and trust is revoked once this connection is broken. However, as the introductory screen (see below) warns, Bluetooth range is highly variable and may extend up to 100 meters. Thus while the 'keep device unlocked while connected to trusted watch on wrist' use case makes a lot of sense, in practice the Android device may remain unlocked even when the trusted Bluetooth device (wearable, etc.) is in another room.


As discussed earlier, an NFC trusted device can be quite flexible, and has the advantage that, unlike Bluetooth, proximity is well defined (typically not more than 10 centimeters). While Google's Smart Lock seems to support an active NFC device (internally referred to as the 'Precious tag'), no such device has been publicly announced yet. If the Precious is not found, Google's NFC-based trust agent falls back to UID-based authentication by saving the hash of the scanned tag's UID (tag registration screen shown below). For the popular NFC-A tags (most MIFARE variants) this UID is 7 bytes long and is not necessarily unique. While using the UID for authentication is a fairly widespread practice, it was originally intended for anti-collision alone, and not for authentication. On standard MIFARE tags the UID is read-only, but cards with a rewritable UID do exist, so cloning a MIFARE trusted tag is quite possible. Tags can also be emulated with a programmable device such as the Proxmark III. Therefore, the security level provided by UID-based authentication is quite low.

                  Summary

Android 5.0 (Lollipop) introduces a new trust framework based on trust agents, which can notify the system when the device is in a trusted environment. As the system lockscreen now listens for trust events, it can change its behaviour based on the trust state of the current user. This makes it easy to augment or replace the traditional pattern/PIN/password user authentication methods by installing trust agents. Trust agent functionality is currently only available to system applications, and Lollipop can only support a single active trust agent. Google Play Services provides several trust triggers (trustlets) under the name 'Smart Lock' via its trust agent. While they can greatly improve device usability, none of the currently available Smart Lock methods are particularly precise or secure, so they should be used with care.