- PVSM.RU - https://www.pvsm.ru -
In the modern world, the popularity of mobile applications continues to grow, and so does the use of the OAuth 2.0 protocol in mobile apps. Making OAuth 2.0 secure in a mobile app takes more than implementing the standard as is: one needs to consider the specifics of mobile applications and apply additional security mechanisms.
In this article, I want to share attacks on mobile OAuth 2.0 implementations and the security mechanisms used to prevent them. The concepts described here are not new, but structured information on this topic is lacking. The main aim of the article is to fill this gap.
OAuth 2.0 is an authorization protocol that describes a way for a client service to gain secure access to the user’s resources on a service provider. Thanks to OAuth 2.0, the user doesn’t need to enter his password anywhere outside the service provider: the whole process is reduced to clicking the «I agree to provide access to…» button.
A provider is a service that owns the user data and, by permission of the user, provides third party services (clients) with a secure access to this data. A client is an application that wants to get the user data stored by the provider.
Soon after the OAuth 2.0 protocol was released, it was adapted for authentication, even though it wasn’t meant for that. Using OAuth 2.0 for authentication shifts the attack vector from the data stored at the service provider to the user accounts of the client service.
But authentication was just the beginning. In the era of mobile apps and the glorification of conversion rates, signing into an app with a single button sounded nice. Developers adapted OAuth 2.0 for mobile use; of course, not many worried about security and the specifics of mobile apps: zap, and into production it went! Then again, OAuth 2.0 doesn’t work well outside of web applications: mobile and desktop apps share the same problems.
So, let’s figure out how to make mobile OAuth 2.0 secure.
There are two major mobile OAuth 2.0 security issues:

1. It is impossible to keep client_secret confidential inside a mobile app.
2. The browser redirect is replaced with less secure mechanisms (Custom URI Scheme or AppLink).

Let’s look in-depth at these issues.
To understand the roots and consequences of the first issue, let’s see how OAuth 2.0 works in case of server-to-server interaction and then compare it with OAuth 2.0 in case of client-to-server interaction.
In both cases, it all starts with the client service registering on the provider service and receiving client_id and, in some cases, client_secret. client_id is a public value required to identify the client service, as opposed to the client_secret value, which is private. You can read more about the registration process in RFC 7591 [1].
The scheme below shows the way OAuth 2.0 operates in case of server-to-server interaction.
Picture origin: https://tools.ietf.org/html/rfc6749#section-1.2 [2]
The OAuth 2.0 protocol can be divided into three main steps:

1. Getting authorization_code (hereinafter, code).
2. Exchanging code for access_token.
3. Accessing the protected resources with access_token.

Let’s elaborate on the process of getting the code value:

1. The client opens a browser with the code request.
2. The user logs in and agrees to provide access to his data.
3. The provider returns code to the user browser, which redirects code to the client.
Let’s talk more about the process of getting access_token:

1. The client server sends a request for access_token; code, client_secret and redirect_uri are included in the request.
2. If code, client_secret and redirect_uri are valid, access_token is provided.
The request for access_token is done according to the server-to-server scheme; therefore, in general, an attacker has to hack the client service server or the service provider server in order to steal access_token.
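To make the exchange concrete, here is a minimal Python sketch of how a client server might build the form-encoded body of the access_token request described above. The parameter values are made up for illustration; a real client would use the values issued at registration.

```python
from urllib.parse import urlencode

def build_token_request_body(code: str, client_id: str,
                             client_secret: str, redirect_uri: str) -> str:
    """Build the form-encoded body of an access_token request
    (Authorization Code Grant, server-to-server)."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    })

# Example with made-up values:
body = build_token_request_body(
    code="b57b236c9bcd2a61fcd627b69ae2d7a6eb5bc13f2dc25311348ee08df43bc0c4",
    client_id="my-client-id",
    client_secret="my-client-secret",
    redirect_uri="https://client.example/callback",
)
```

Note that client_secret appears in this request only because it is sent from the client’s backend; as we’ll see below, this doesn’t work for a mobile app without a backend.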
Now let’s look at the mobile OAuth 2.0 scheme without a backend (client-to-server interaction).
Picture origin: https://tools.ietf.org/html/rfc8252#section-4.1 [3]
The main scheme is divided into the same steps:

1. Getting code.
2. Exchanging code for access_token.
3. Accessing the resources with access_token.

However, in this case the mobile app also performs the server functions; therefore, client_secret would be embedded into the application. As a result, client_secret cannot be kept hidden from an attacker on mobile devices. An embedded client_secret can be extracted in two ways: by analyzing the application-to-server traffic or by reverse engineering. Both can be realized easily, and that’s why client_secret is useless on mobile devices.
You might ask: «Why don’t we get access_token right away?» It might seem that this extra step is unnecessary; furthermore, there’s the Implicit Grant [4] scheme that allows a client to receive access_token right away. Even though it can be used in some cases, Implicit Grant won’t work for secure mobile OAuth 2.0.
In general, the Custom URI Scheme [5] and AppLink [6] mechanisms are used for the browser-to-app redirect. Neither of these mechanisms is as secure as a browser redirect on its own.
Custom URI Scheme (or deep link) is used in the following way: a developer declares an application scheme before deployment. The scheme can be anything, and one device can have several applications with the same scheme.
Things are easy when every scheme on a device corresponds to a single application. But what if two applications register the same scheme on one device? How does the operating system decide which app to open when a Custom URI Scheme is invoked? Android shows a window with a choice of apps and a link to follow. iOS doesn’t have a procedure for this [7], so either application may be opened. Either way, an attacker gets a chance to intercept code or access_token [8].
Unlike Custom URI Scheme, AppLink guarantees to open the right application, but this mechanism has several flaws:
All these AppLink flaws increase the learning curve for potential clients of the service and may break the user’s OAuth 2.0 flow under some circumstances. That’s why many developers don’t choose the AppLink mechanism as a substitute for the browser redirect in OAuth 2.0.
Mobile OAuth 2.0 problems have created some specific attacks. Let’s see what they are and how they work.
Let’s consider a situation where the user device has a legitimate application (an OAuth 2.0 client) and a malicious application that registered the same scheme as the legitimate one. The picture below shows the attack scheme.
Picture origin https://tools.ietf.org/html/rfc7636#section-1 [9]
Here’s the problem: at the fourth step, the browser returns code to the application via Custom URI Scheme, so code can be intercepted by the malicious app (since it registered the same scheme as the legitimate app). Then the malicious app exchanges code for access_token and gains access to the user’s data.
What’s the protection? In some cases, you can use inter-process communication; we’ll talk about it later. In general, you need the scheme called Proof Key for Code Exchange [10]. It’s described in the scheme below.
Picture origin: https://tools.ietf.org/html/rfc7636#section-1.1 [11]
The client request has several extra parameters: code_verifier, code_challenge (in the scheme, t(code_verifier)) and code_challenge_method (in the scheme, t_m).
code_verifier is a random value with a minimum length of 256 bits [12] that is used only once [13]. So, a client must generate a new code_verifier for every code request.
code_challenge_method is the name of a conversion function, mostly SHA-256.
code_challenge is code_verifier with the code_challenge_method conversion applied, encoded in URL-safe Base64.
The conversion of code_verifier into code_challenge is necessary to rebuff attack vectors based on interception of code_verifier (for example, from the device system logs) when requesting code.
If a user device doesn’t support SHA-256, the client is allowed to use the plain conversion of code_verifier [14]. In all other cases, SHA-256 must be used.
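As a sketch of how a client might generate these values (Python, assuming the S256 method; the function names are illustrative):

```python
import base64
import hashlib
import secrets

def generate_code_verifier() -> str:
    # 32 random bytes = 256 bits of entropy from a CSPRNG,
    # encoded as URL-safe Base64 without padding
    return base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")

def derive_code_challenge(code_verifier: str) -> str:
    # code_challenge = URL-safe Base64(SHA-256(code_verifier)) — the S256 method
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

verifier = generate_code_verifier()    # a fresh value for every code request
challenge = derive_code_challenge(verifier)
```

Only challenge (plus the method name) travels in the code request; verifier stays on the device until the access_token request.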
This is how the scheme works:

1. The client generates code_verifier and memorizes it.
2. The client selects code_challenge_method and derives code_challenge from code_verifier.
3. The client requests code, with code_challenge and code_challenge_method added to the request.
4. The provider stores code_challenge and code_challenge_method on the server and returns code to the client.
5. The client requests access_token, with code_verifier added to the request.
6. The provider derives code_challenge from the incoming code_verifier and compares it to the code_challenge it saved.
7. If the values match, the provider returns access_token to the client.
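On the provider side, step 6 can be sketched as follows (Python; the function name is illustrative, not a real provider API):

```python
import base64
import hashlib
import hmac

def verify_code_verifier(stored_challenge: str, stored_method: str,
                         incoming_verifier: str) -> bool:
    """Re-derive code_challenge from the incoming code_verifier and compare
    it with the value saved when code was requested."""
    if stored_method == "S256":
        digest = hashlib.sha256(incoming_verifier.encode("ascii")).digest()
        derived = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    elif stored_method == "plain":
        derived = incoming_verifier
    else:
        return False  # unknown method: refuse to issue access_token
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(derived, stored_challenge)
```

If this function returns False, the provider must not issue access_token for that code.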
To understand why code_challenge prevents code interception, let’s see how the protocol flow looks from the attacker’s perspective:

1. The legitimate app requests code (code_challenge and code_challenge_method are sent together with the request).
2. The malicious app intercepts code (but not code_challenge, since code_challenge is not in the response).
3. The malicious app requests access_token (with a valid code, but without a valid code_verifier).
4. The provider notices the code_challenge mismatch and raises an error.
Note that the attacker can’t guess code_verifier (a random 256-bit value!) or find it somewhere in the logs (since the first request actually transmitted code_challenge).
So, code_challenge answers the service provider’s question: «Is access_token requested by the same app client that requested code, or by a different one?»
OAuth 2.0 CSRF is relatively harmless when OAuth 2.0 is used for authorization. It’s a completely different story when OAuth 2.0 is used for authentication: in that case, OAuth 2.0 CSRF often leads to account takeover.
Let’s talk more about the CSRF attack as applied to OAuth 2.0, using the example of a taxi app client and a provider.com provider. First, the attacker logs into the attacker@provider.com account on his own device and receives code for the taxi app. Then he interrupts the OAuth 2.0 process and generates a link:

com.taxi.app://oauth?
code=b57b236c9bcd2a61fcd627b69ae2d7a6eb5bc13f2dc25311348ee08df43bc0c4
Then the attacker sends this link to his victim, for example, as a mail or text message purporting to come from the taxi support team. The victim clicks the link, the taxi app opens and receives access_token. As a result, the victim ends up in the attacker’s taxi account. Unaware of that, the victim uses this account: makes trips, enters personal data, etc.
Now the attacker can log into the victim’s taxi account at any time, since it’s linked to attacker@provider.com [15]. The login CSRF attack allowed the attacker to steal an account.
CSRF attacks are usually rebuffed with a CSRF token (also called state), and OAuth 2.0 is no exception. How to use the CSRF token:

1. The client generates state, saves it on the device and adds it to the code access request.
2. The provider returns the same state together with code in its response.
3. The client compares the incoming state with the saved one; if they differ, the process is aborted.
CSRF token requirements: the nonce [13] must be at least 256 bits long and obtained from a good source of pseudo-random sequences.
In a nutshell, the CSRF token lets the application client answer the following question: «Was it me who initiated the access_token request, or is someone trying to trick me?»
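Under these requirements, generating and checking state might look like this (a Python sketch with illustrative function names):

```python
import hmac
import secrets

def new_state() -> str:
    # 32 bytes = 256 bits from a cryptographically secure PRNG, hex-encoded
    return secrets.token_hex(32)

def state_matches(saved_state: str, incoming_state: str) -> bool:
    # constant-time comparison; on a mismatch the client must abort the flow
    return hmac.compare_digest(saved_state, incoming_state)

saved = new_state()  # generated before the code request and stored on the device
```

A new state must be generated for every OAuth 2.0 flow, just like code_verifier.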
Mobile applications without a backend sometimes store hardcoded client_id and client_secret values. Of course, they can be easily extracted by reverse engineering the app.
The impact of exposing client_id and client_secret depends heavily on how much trust the service provider puts in a particular client_id, client_secret pair. Some providers use them only to distinguish one client from another, while others open hidden API endpoints or apply softer rate limits for certain clients.
The article Why OAuth API Keys and Secrets Aren't Safe in Mobile Apps [16] elaborates more on this topic.
Some malicious apps can imitate legitimate apps and display a consent screen on their behalf (a consent screen is a screen where the user sees «I agree to provide access to…»). The user might click «allow» and hand his data to the malicious app.
Android and iOS provide mechanisms for applications to cross-check each other: a provider application can make sure that a client application is legitimate and vice versa.
Unfortunately, if the OAuth 2.0 flow goes through a browser, it’s impossible to defend against this attack.
We took a closer look at the attacks specific to mobile OAuth 2.0. However, let’s not forget about attacks on the original OAuth 2.0: redirect_uri substitution, traffic interception over an insecure connection, etc. You can read more about them here [17].
We’ve learned how OAuth 2.0 protocol works and what vulnerabilities it has on mobile devices. Now let’s put the separate pieces together to have a secure mobile OAuth 2.0 scheme.
Let’s start with the right way to use the consent screen. Mobile devices offer two ways of opening a web page in a mobile application.
The first way is via a Browser Custom Tab (on the left in the picture). Note: the Browser Custom Tab on Android is called Chrome Custom Tab, and on iOS — SafariViewController. It’s just a browser tab displayed inside the app: there’s no visual switching between the applications.
The second way is via a WebView (on the right in the picture), and I consider it bad with respect to mobile OAuth 2.0.

A WebView is a browser embedded into a mobile app.
"Embedded browser" means that access to cookies, storage, cache, history, and other Safari and Chrome data is forbidden for WebView. The reverse is also correct: Safari and Chrome cannot get access to WebView data.
"Mobile app browser" means that a mobile app that runs WebView has full access to cookies, storage, cache, history and other WebView data.
Now, imagine: a user clicks «enter with…», and the WebView of a malicious app requests his login and password for the service provider. Epic fail: the malicious app gets full access to the entered credentials, and there is no address bar, so the user can’t even check which site he is giving them to.
Considering all the cons of WebView, the conclusion is obvious: use a Browser Custom Tab for the consent screen.
If anyone has arguments in favor of WebView instead of Browser Custom Tab, I’d appreciate if you write about it in the comments.
We’re going to use the Authorization Code Grant scheme, since it allows us to add code_challenge as well as state and thus defend against the code interception attack and OAuth 2.0 CSRF.
Picture origin: https://tools.ietf.org/html/rfc8252#section-4.1 [3]
The code access request (steps 1-2) will look as follows:
https://o2.mail.ru/code?
redirect_uri=com.mail.cloud.app%3A%2F%2Foauth&
state=927489cb2fcdb32e302713f6a720397868b71dd2128c734181983f367d622c24&
code_challenge=ZjYxNzQ4ZjI4YjdkNWRmZjg4MWQ1N2FkZjQzNGVkODE1YTRhNjViNjJjMGY5MGJjNzdiOGEzMDU2ZjE3NGFiYw%3D%3D&
code_challenge_method=S256&
scope=email%2Cid&
response_type=code&
client_id=984a644ec3b56d32b0404777e1eb73390c
At step 3, the browser gets a response with redirect:
com.mail.cloud.app://oauth?
code=b57b236c9bcd2a61fcd627b69ae2d7a6eb5bc13f2dc25311348ee08df43bc0c4&
state=927489cb2fcdb32e302713f6a720397868b71dd2128c734181983f367d622c24
At step 4, the browser opens the Custom URI Scheme link and passes code and the CSRF token over to the client app.
The access_token request (step 5):
https://o2.mail.ru/token?
code_verifier=e61748f28b7d5daf881d571df434ed815a4a65b62c0f90bc77b8a3056f174abc&
code=b57b236c9bcd2a61fcd627b69ae2d7a6eb5bc13f2dc25311348ee08df43bc0c4&
client_id=984a644ec3b56d32b0404777e1eb73390c
The last step brings a response with access_token.
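Putting the pieces together, the code access request above could be assembled like this (a Python sketch; the endpoint and client_id are just the example values from this article, and the code_challenge placeholder stands in for a value derived from a real code_verifier):

```python
from urllib.parse import urlencode

def build_code_request_url(authorize_endpoint: str, client_id: str,
                           redirect_uri: str, state: str,
                           code_challenge: str) -> str:
    """Assemble the code request URL with PKCE and CSRF-token parameters."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,
        "code_challenge": code_challenge,
        "code_challenge_method": "S256",
        "scope": "email,id",
    }
    return authorize_endpoint + "?" + urlencode(params)

url = build_code_request_url(
    "https://o2.mail.ru/code",
    client_id="984a644ec3b56d32b0404777e1eb73390c",
    redirect_uri="com.mail.cloud.app://oauth",
    state="927489cb2fcdb32e302713f6a720397868b71dd2128c734181983f367d622c24",
    code_challenge="example-code-challenge",  # placeholder, not a real S256 value
)
```

urlencode takes care of percent-encoding the redirect_uri and scope, as in the example request above.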
This scheme is generally secure, but there are some special cases when OAuth 2.0 can be simpler and more secure.
Android has a mechanism for bidirectional data exchange between processes: IPC (inter-process communication). IPC is better than Custom URI Scheme for two reasons:

1. Unlike Custom URI Scheme, IPC can verify the authenticity of the application it communicates with, so the response cannot be intercepted by a malicious app.
2. The provider application can check the client application before handing over access_token.
Therefore, we can use Implicit Grant [4] to simplify the mobile OAuth 2.0 scheme. Dropping code_challenge and state also means a smaller attack surface. And we can lower the risk of malicious apps posing as a legitimate client to steal user accounts.
Besides implementing this secure mobile OAuth 2.0 scheme, a provider should develop an SDK for its clients. It will simplify OAuth 2.0 implementation on the client side and reduce the number of errors and vulnerabilities.
Let me summarise it for you. Here is the basic checklist of secure OAuth 2.0 for OAuth 2.0 providers:

1. access_token and other sensitive data must be stored in the Keychain on iOS and in Internal Storage on Android. These storages were designed specifically for that. A Content Provider can be used on Android, but it must be configured securely.
2. client_secret is useless unless it’s stored on the backend. Do not give it away to public clients.
3. Use code_challenge to protect against the code interception attack.
4. Use state to protect against OAuth 2.0 CSRF.
5. code must be used only once and have a short lifespan.
6. The exact-match check of redirect_uri and the other recommendations for the original OAuth 2.0 are still in force.
Thanks to everyone who helped me write this article, especially to Sergei Belov, Andrei Sumin and Andrey Labunets for the feedback on technical details, to Pavel Kruglov for the English translation and to Daria Yakovleva for the help with the release of the Russian version of this article.
Author: nikitastupin
Source [27]
Links in the text:
[1] RFC 7591: https://tools.ietf.org/html/rfc7591
[2] https://tools.ietf.org/html/rfc6749#section-1.2: https://tools.ietf.org/html/rfc6749#section-1.2
[3] https://tools.ietf.org/html/rfc8252#section-4.1: https://tools.ietf.org/html/rfc8252#section-4.1
[4] Implicit Grant: https://tools.ietf.org/html/rfc6749#section-4.2
[5] Custom URI Scheme: https://developer.apple.com/documentation/uikit/core_app/allowing_apps_and_websites_to_link_to_your_content/communicating_with_other_apps_using_custom_urls
[6] AppLink: https://developer.android.com/training/app-links/verify-site-associations
[7] doesn’t have a procedure for this: https://developer.apple.com/library/archive/documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/Inter-AppCommunication/Inter-AppCommunication.html#//apple_ref/doc/uid/TP40007072-CH6-SW7
[8] a chance to intercept code or access_token: https://habr.com/company/mailru/blog/456702#1
[9] https://tools.ietf.org/html/rfc7636#section-1: https://tools.ietf.org/html/rfc7636#section-1
[10] Proof Key for Code Exchange: https://tools.ietf.org/html/rfc7636
[11] https://tools.ietf.org/html/rfc7636#section-1.1: https://tools.ietf.org/html/rfc7636#section-1.1
[12] with a minimum length of 256 bit: https://tools.ietf.org/html/rfc7636#section-7.1
[13] that is used only once: https://ru.wikipedia.org/wiki/Nonce
[14] client is allowed to use plain conversion of code_verifier: https://tools.ietf.org/html/rfc7636#section-4.2
[15] attacker@provider.com: mailto:attacker@provider.com
[16] Why OAuth API Keys and Secrets Aren't Safe in Mobile Apps : https://developer.okta.com/blog/2019/01/22/oauth-api-keys-arent-safe-in-mobile-apps
[17] here: https://sakurity.com/oauth
[18] phishing: https://ru.wikipedia.org/wiki/%D0%A4%D0%B8%D1%88%D0%B8%D0%BD%D0%B3
[19] implementing your own OAuth 2.0 scheme: https://hackerone.com/reports/314814
[20] 3-minute demo: https://www.youtube.com/watch?v=vjCF_O6aZIg&feature=youtu.be&t=233
[21] https://www.youtube.com/watch?v=vjCF_O6aZIg: https://www.youtube.com/watch?v=vjCF_O6aZIg
[22] https://hackerone.com/reports/55140: https://hackerone.com/reports/55140
[23] https://oauth.net/2/: https://oauth.net/2/
[24] https://tools.ietf.org/html/rfc8252: https://tools.ietf.org/html/rfc8252
[25] https://tools.ietf.org/html/rfc6819: https://tools.ietf.org/html/rfc6819
[26] https://developers.google.com/identity/protocols/OAuth2InstalledApp: https://developers.google.com/identity/protocols/OAuth2InstalledApp
[27] Source: https://habr.com/ru/post/456702/?utm_campaign=456702&utm_source=habrahabr&utm_medium=rss