Keystone does not validate that s3tokens requests came from s3_token middleware

Bug #1566416 reported by Tim Burke
Affects                          Status         Importance   Assigned to   Milestone
OpenStack Identity (keystone)    Fix Released   Undecided    Unassigned
OpenStack Security Advisory      Won't Fix      Undecided    Unassigned

Bug Description

Ordinarily, during a keystone-authenticated swift3 request,

 * the client talks to a swift proxy-server, sending a request with
   an Authorization header that includes a credential identifier and
   signature;
 * swift3 (in the proxy-server's pipeline) recognizes that this is
   an S3 request, normalizes the request information to be the same
   format as was used in the signature, and includes it in the WSGI
   environment;
 * s3_token picks up the normalized request, credential identifier,
   and user-provided signature and sends them all to keystone;
 * keystone's s3 extension uses the credential identifier to look up
   the credential's secret, signs the normalized request, and
   compares the user-provided signature to the expected one. If they
   match, a fresh project-scoped token is issued; otherwise a 401 is
   returned.
 * s3_token either bubbles up the 401 response or puts the token and
   tenant information into the WSGI environment so swift3 and
   auth_token may continue handling the request.
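
For illustration, a minimal sketch of the s3tokens exchange in the last three bullets. The payload shape follows what the s3_token middleware sends and what keystone's s3 extension checks, but the functions below are paraphrased for this sketch, not the real implementations:

import base64
import hashlib
import hmac

def s3_token_payload(access, signature, string_to_sign):
    # What s3_token POSTs to keystone's s3tokens endpoint; 'access' and
    # 'signature' come from the client's Authorization header, while
    # string_to_sign (bytes) is the normalized request built by swift3.
    return {'credentials': {
        'access': access,
        'token': base64.urlsafe_b64encode(string_to_sign).decode(),
        'signature': signature,
    }}

def keystone_check(credentials, secret):
    # Keystone looks up the secret (bytes) for 'access', re-signs the
    # normalized request, and compares against the client's signature.
    msg = base64.urlsafe_b64decode(credentials['token'])
    expected = base64.b64encode(
        hmac.new(secret, msg, hashlib.sha1).digest()).strip().decode()
    return hmac.compare_digest(expected, credentials['signature'])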

Given knowledge of the HTTP method, path, and headers of a valid past
request, an attacker can bypass swift and obtain a project-scoped
token directly from keystone. This may be done well after the initial
request (and resulting token) have expired. The new token may be used
to access any service for which the original user is authorized.

Proposed solution:

Have the s3 extension in keystone verify that the request came from
the s3_token middleware. This may be done via either a service token
(similar to what's done in keystonemiddleware's auth_token middleware)
or a rotating pre-shared key (similar to what's done in swift's
tempurl middleware).
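
Purely as an illustration of the pre-shared-key variant (nothing like this exists today; the header names and helpers below are invented for the sketch, and psk/body are assumed to be bytes):

import hashlib
import hmac
import time

def sign_s3tokens_request(psk, body):
    # s3_token would attach these headers so keystone can tell the
    # request really came from the middleware and is fresh.
    ts = str(int(time.time()))
    mac = hmac.new(psk, ts.encode() + b':' + body, hashlib.sha256)
    return {'X-S3Token-Timestamp': ts, 'X-S3Token-HMAC': mac.hexdigest()}

def verify_s3tokens_request(valid_psks, headers, body, max_skew=300):
    # Keystone side: accept any of the currently valid keys, so keys can
    # rotate without an outage (mirroring tempurl's key-rotation model).
    ts = headers['X-S3Token-Timestamp']
    if abs(time.time() - int(ts)) > max_skew:
        return False
    return any(
        hmac.compare_digest(
            hmac.new(k, ts.encode() + b':' + body, hashlib.sha256).hexdigest(),
            headers['X-S3Token-HMAC'])
        for k in valid_psks)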

Alternative solutions:

Have the s3 extension only enabled in the v2.0 admin pipeline,
remove it from the v3 pipeline, and expose the v2.0 admin endpoint
only on internal networks. With the convergence of the admin and
public pipelines in v3, this seems unlikely to provide a long-term
solution. However, it may be a useful mitigation for deployments
that cannot quickly upgrade.

Have the s3 extension parse the normalized request, find the
timestamp, and reject requests with a timestamp more than 5 minutes
off from the server's time. This conflates keystone's responsibility
(authentication) with that of swift3 (authorization) and leads to
duplicated efforts.
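
A toy version of that check (illustrative only; the hard part is locating the timestamp inside each protocol's and version's string-to-sign):

import email.utils
import time

def timestamp_acceptable(date_header, max_skew=300):
    # Hypothetical helper: parse the Date value pulled out of the
    # normalized request and reject anything more than max_skew seconds
    # away from the server's clock.
    parsed = email.utils.parsedate_tz(date_header)
    if parsed is None:
        return False
    return abs(time.time() - email.utils.mktime_tz(parsed)) <= max_skew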

Additional mitigation:

Require users to change their S3/EC2 credentials regularly.

(Note that while this is similar to bug #1561199, the two bugs cover
separate issues. 1561199 involves inter-middleware communication
within a single WSGI pipeline. This bug involves the communication
between separate services.)

Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

Since this report concerns a possible security risk, an incomplete security advisory task has been added while the core security reviewers for the affected project or projects confirm the bug and discuss the scope of any vulnerability along with potential solutions.

Changed in ossa:
status: New → Incomplete
description: updated
Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

@keystone-coresec please triage this report

Revision history for this message
Guang Yee (guang-yee) wrote :

++ for the proposed solution to have the s3token API protected by a service token.

Come to think of it, both ec2 and s3 are more like "token validation" APIs from Keystone's perspective. The "token" is the authorization header.

In a production environment, where both Swift and Keystone are expected to be running behind a private network with controlled access, the risk of being able to capture/snoop the headers is similar to that of token validation. Otherwise, the headers can only be reproduced if the attacker has the user's signing key, and if that's the case, it's game over for that particular account.

Revision history for this message
David Stanek (dstanek) wrote :

I'm trying to grok the attack vectors here.
 1. User's signing key is compromised (as Guang mentioned, but not really the concern here)
 2. The service using s3_token is compromised and an attacker can see the HTTP transactions between the service and keystone.
 3. The attacker has compromised a different node, but is able to see the traffic between the service and keystone.

Are there any that I am missing?

I don't see how the proposed solution handles any of those cases. The alternative solution is more like what other protocols do and at least limits the timeframe of the abuse. We can also push SSL as a way to prevent #3.

Can you provide more details as to how the attacker gets access to those headers? It would make it much easier to make sure our solution prevents it.

Revision history for this message
Guang Yee (guang-yee) wrote :

David, one major concern is tempurl, where the signature is part of the URL, which you share with others. In that sense, anyone with access to the tempurl can easily reconstruct the headers and therefore get a token on behalf of the owner.

Revision history for this message
Tim Burke (1-tim-z) wrote :

There are three networks in play:

 * client <-> swift
 * swift <-> keystone
 * client <-> keystone

A captured request on either of the first two will allow an attacker to obtain tokens on the third. Only the second is expected to be running on a private network.

Snooping on the client <-> swift connection should be preventable via SSL. However, a user may have exposed the details of a request with the expectation that it would no longer be useful (because the request was already expired), or that it would be of limited use (specifically, that it could only perform the same operation against swift, and only within the next few minutes). This may happen because:

 * The user is looking for support, and drops client logs on http://paste.openstack.org/
 * The user is giving a demo or presentation and wants to include client logs from several days ago as part of it.

Guang's point about tempurls is also valid; it would be a fairly safe guess that the method would be GET and that there would be no additional headers that were used in the signature calculation. Further, the URL-with-query-string would likely appear in swift's proxy access logs where it could be recovered from decommissioned hardware that wasn't properly wiped.

Revision history for this message
Dolph Mathews (dolph) wrote :

I haven't dug into the implementation of this, but is there not a nonce transmitted with the timestamp in requests from the client? If so, that should be all it takes to prevent replay attacks, but if it's there, I assume we're ignoring it on the service side?

Revision history for this message
Tim Burke (1-tim-z) wrote :

For v2 signatures, there's a date field included in the signature [1], which should be either an HTTP-date or the timestamp from the Expires query param.

There's also in-progress work to add support for v4 signatures; nothing has landed in swift3 yet, but keystone knows how to compute them. For v4 signatures, the request date should appear in ISO8601 format on the second line of the string-to-sign [2].

In both cases, swift3 checks (or will check) it [3], but keystone just computes the expected signature and compares [4].

[1] https://github.com/openstack/swift3/blob/v1.10/swift3/request.py#L360-L363
[2] http://docs.aws.amazon.com/general/latest/gr/sigv4-create-string-to-sign.html
[3] https://github.com/openstack/swift3/blob/v1.10/swift3/request.py#L189-L222
[4] https://github.com/openstack/keystone/blob/9.0.0/keystone/contrib/s3/core.py#L70
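
For reference, the v2 string-to-sign follows the AWS-documented layout below, which is where the Date (or Expires) value that swift3 validates comes from; this is a paraphrase of the AWS docs, not swift3 or keystone code:

def v2_string_to_sign(http_verb, content_md5, content_type, date_or_expires,
                      canonicalized_amz_headers, canonicalized_resource):
    # For presigned URLs the Expires value takes the Date position; when
    # x-amz-date is used, the Date line is signed as an empty string and
    # the timestamp moves into the canonicalized amz headers.
    return '\n'.join([http_verb, content_md5, content_type, date_or_expires,
                      canonicalized_amz_headers + canonicalized_resource])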

Revision history for this message
Dolph Mathews (dolph) wrote :

I've not implemented signatures before, but shouldn't the nonce be persisted on the service side (at least as long as the 5 minute window you suggested in the original report) to prevent replay attacks?

Revision history for this message
Tim Burke (1-tim-z) wrote :

Sorry, I should have clarified: there is no nonce; the string that is signed is derived entirely from the client's request and credentials. A timestamp will necessarily be part of that, either as a Date header, x-amz-date header, or Expires query param. No part of the request or string-to-sign is issued by swift (or any of the various middlewares in swift's pipeline that are involved), and only the ec2 credentials (access & secret keys) are issued by keystone.

Revision history for this message
Dolph Mathews (dolph) wrote :

While persisting nonces would completely prevent replay attacks, that approach would not scale as well as the second proposed alternative solution, which only allows replay attacks within a narrow window. I think I'm in favor of that alternative, as it'd be the most backportable solution with the least impact (and I'd argue it's something we should be doing regardless):

  Have the s3 extension parse the normalized request, find the timestamp,
  and reject requests with a timestamp more than 5 minutes off from the
  server's time.

Relatedly, do we support/acknowledge the Expires query parameter today?

Revision history for this message
Guang Yee (guang-yee) wrote :

Both ec2 and s3 have timestamps in the pre-sign payload. But they are very much protocol- and version-specific, so we will have to account for all of them. Deciding on an allowable time gap may not be trivial, though, as each protocol is different. I remember Heat also supports pre-signed ec2 URLs; we will need to consult with them on the pre-signed time expectation so we don't break them.

The reason I favor the proposed solution is that it is consistent with our authtoken middleware. It's basically an update-keystonemiddleware versus update-Keystone trade-off in terms of deployment mitigation. As far as backporting goes, it wouldn't make much difference whether we are backporting to Keystone or to keystonemiddleware, right?

Revision history for this message
Dolph Mathews (dolph) wrote :

The primary solution proposed is a relatively complicated patch, and -- I assume -- requires new configuration on the part of deployers (as part of a security patch, no less). Maybe it's a good long term solution?

Revision history for this message
Guang Yee (guang-yee) wrote :

Yes, I agree checking the time gap is a much better alternative in terms of minimizing deployment impact. However, I am not sure how reliable it will be, given that:

1. we don't know what that time gap should be. It could be different per protocol and version. We could make it configurable, but I suspect it may not be a trivial exercise to figure out a default value.
2. even if we enforce a time gap, an attacker can still take advantage of that narrow window, whatever it may be.
3. deployers will also need to worry about clock skew, though it may not be a big deal as all the controllers are expected to be running NTP.

Also, we should consider advising deployers not to expose the Keystone S3 and EC2 APIs directly to the public. They should be used by the internal IaaS services only.

Revision history for this message
Tim Burke (1-tim-z) wrote :

> Relatedly, do we support/acknowledge the Expires query parameter today?

Yes, presigned urls (which require the Expires query param) are supported. Swift3 doesn't currently have a limit on how far into the future the Expires timestamp can be (!), but according to [1] one week is the limit in AWS (at least for v4 signatures). We might be able to drop to 48 or even 24 hours without users complaining too much, but shorter than that will severely restrict the utility of presigned urls. On the other hand, though, 24 hours is an awfully long time to be able to impersonate a user.
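
A toy version of a cap swift3 could enforce, if it grew one (hypothetical; as noted, no such limit exists today):

import time

ONE_WEEK = 7 * 24 * 3600  # the AWS v4 presigned-URL limit mentioned above

def expires_acceptable(expires_epoch, max_lifetime=ONE_WEEK):
    # Reject presigned URLs that are already expired or whose Expires
    # value lies unreasonably far in the future.
    return time.time() < expires_epoch <= time.time() + max_lifetime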

> requires new configuration on the part of deployers

On the plus side, all of the new configuration (assuming a service token approach) should already be present for the authtoken middleware; it should just be a matter of copying it for s3_token.

> 3. deployer will also need to worry about time skew. Though it may not be a big deal as all the controllers are expected to be running NTP.

I wouldn't worry about that, for precisely the reason you've outlined.

> not to directly expose the Keystone S3 and EC2 APIs to the public

This is core to the problem. There's no reason that an end-user should ever hit the S3 extension themselves. However, I'm not clear on how deployers could limit that access given v3's move toward a single unified pipeline.

Stepping back for a moment, why does the S3 extension issue a token at all? Doesn't this create a separate token for every client request? Could we drop the token from the S3 extension's response, then have s3_token create its own token, cache it with a drastically shorter cache time (measured in seconds rather than minutes), and continue stuffing it into the WSGI environment for auth_token to consume?

[1] http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html
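
A toy sketch of the short-lived caching idea in the last paragraph above (names invented; not real swift3 or keystonemiddleware code):

import time

class ShortLivedTokenCache(object):
    # Keep the token data s3_token derives for a matter of seconds,
    # keyed on the credential id and signature, so a burst of requests
    # signed with the same credential doesn't hit keystone every time.
    def __init__(self, ttl=10):
        self.ttl = ttl
        self._entries = {}

    def get(self, access, signature):
        entry = self._entries.get((access, signature))
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]
        return None

    def put(self, access, signature, token_data):
        self._entries[(access, signature)] = (time.time(), token_data)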

Revision history for this message
Guang Yee (guang-yee) wrote :

> This is core to the problem. There's no reason that an end-user should ever hit the S3 extension themselves. However, I'm not clear on how deployers could limit that access given v3's move toward a single unified pipeline.

This can easily be done at the proxy level. In fact, we are doing it in production. Just put up an API filter. In production, nobody allows clients to connect directly to Keystone. It usually looks like this:

<client> -------> <Proxy/LB Cluster> -------> <Keystone Cluster> -------> <Backend Cluster>

> Stepping back for a moment, why does the S3 extension issue a token at all? Doesn't this create a separate token for every client request? Could we drop the token from the S3 extension's response, then have s3_token create its own token, cache it with a drastically shorter cache time (measured in seconds rather than minutes), and continue stuffing it into the WSGI environment for auth_token to consume?

Yes. It creates a token for every request. Swift uses the token data to perform authorization and ACL checks. See

https://github.com/openstack/swift3

As I mentioned before, Keystone S3 is really acting more like token validation.

Revision history for this message
Guang Yee (guang-yee) wrote :

I do agree that for S3 the token ID itself may not be needed. Just the token data should be adequate.

Revision history for this message
Tim Burke (1-tim-z) wrote :

> This can easily be done at the proxy level. ... Just put up an API filter.

If that's something deployers are familiar with and comfortable doing, great. It would definitely require some better messaging, however; of the 18 public auth endpoints I could easily get from os-client-config, two-thirds in some way publicly expose the S3 extension. I have no idea how many of them actually use swift3, however; without users issuing valid swift3 requests/tempurls, there isn't really any risk.

> I do agree that for S3 the token ID itself may not be needed. Just the token data should adequate.

Yes, sorry; that's what I'd meant. We still need the user data, but there doesn't seem to be any need to have keystone create a persistent token that could later be validated or revoked. If we can remove the token dict from the generated response, I think it would both fix the issue (an attacker could discover who created the request/tempurl, but could not impersonate them) and provide an easy upgrade path (upgrade keystonemiddleware, then upgrade keystone).
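
Roughly, the idea is to go from a response shaped like this (heavily abbreviated and paraphrased, not keystone's exact output):

    {"access": {"token": {"id": "<replayable token id>", "tenant": {...}},
                "user": {...}}}

to one without the id:

    {"access": {"token": {"tenant": {...}},
                "user": {...}}}

so s3_token still learns who signed the request and for which project, but never receives a credential that could be replayed against other services.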

Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

IIUC, a remote attacker knowing the credential's UUID and a tempurl may obtain a valid keystone token. Since those requirements aren't very private and may be publicly shared, there is a security vulnerability here.

The MITM vector is less concerning since it should already be mitigated by TLS.

Anyway, if the fix is not backportable we need to get a Security Note out... In the meantime, can we subscribe swift-coresec to have a look at the proposed solution?

Revision history for this message
Morgan Fainberg (mdrnstm) wrote :

The s3token middleware is being moved to the swift3 repository https://review.openstack.org/#/c/310892/

This means we need to land the fix in both places.

Revision history for this message
Morgan Fainberg (mdrnstm) wrote :

(I think the fix is in the middleware?)

Revision history for this message
Morgan Fainberg (mdrnstm) wrote :

Oh my mistake. This is not a fix for s3token middleware.

The s3token "extension" in keystone could be limited to only access from the s3token middleware (somehow, network, policy, etc).

Revision history for this message
Guang Yee (guang-yee) wrote :

I agree we need a security note at the very least.

Something operators can do in the short term:

1. do not expose those two APIs on the public endpoint. For example, if you are using haproxy:

acl is_ec2 url_end /ec2tokens
acl is_s3 url_end /s3tokens
http-request deny if is_ec2 or is_s3

Other long term options which may require code changes:

1. do not return the token_id (need to confirm this with the Swift team)

2. make both APIs privileged. For example:

"identity:ec2_token": "rule:service_or_admin",
"identity:s3_token": "rule:service_or_admin",

This requires reconfiguration of existing services.

3. validate the timestamp as discussed above. Not sure if this option is doable yet.

Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

Adding swift-coresec to check the first long-term option in comment #23 above.

Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

Any progress ?

Revision history for this message
John Dickinson (notmyname) wrote :

I'm not sure swift core has much to say on this. It's mostly an issue between swift3 and s3_token. (And FWIW, Tim is core in both swift3 and swift.)

But that being said, not returning the token in the s3_token response seems reasonable.

Revision history for this message
Steve Martinelli (stevemar) wrote :

If removing the token ID from the return addresses the issue but breaks an API, I am OK with that. The /v2.0/s3tokens API is completely undocumented and untested.

As a matter of fact, it's completely broken on master right now: we refactored a token method but didn't update its reference in the s3 controller [1] because there are no tests for this feature at all. So if you ran this as of a week ago, you'll get an error.

FWIW, I believe this "feature" existed before dolph's, morgan's and my own time on keystone.

It should also be noted that resolving this bug by removing the token ID from the return should also resolve https://bugs.launchpad.net/ossa/+bug/1561199

[1] https://github.com/openstack/keystone/search?utf8=%E2%9C%93&q=issue_v2_token

Revision history for this message
Tim Burke (1-tim-z) wrote :

I took a stab at removing the need for an actual token ID in https://review.openstack.org/#/c/406424/ -- this should be done even outside of the security-bug context, and I don't think the patch gives away anything here, so I went ahead and submitted it publicly.

I wouldn't mind having someone from Keystone take a look at it, but it seems to work for our func tests. Next week I can add the missing unit tests, get feedback from Kota, and hopefully merge it. As long as we've got a plan for it, I think I'd be comfortable with having Keystone drop the ID any time.

This doesn't quite address bug 1561199 unfortunately. While it removes the impact to other services, any captured pre-signed URL would still allow an attacker to impersonate the signing user for all Swift resources. That's part of why I made them two separate bugs.

Revision history for this message
Tim Burke (1-tim-z) wrote :

Note that even though the endpoint is currently broken on master, newton and mitaka are still affected.

Revision history for this message
Tim Burke (1-tim-z) wrote :

This has been discussed at length, the entire endpoint is currently broken on both master and ocata, and while there's an existing patch to make the endpoint work again [1], it consciously avoids sending back a token ID. I think this could be made public. I'll start moving forward on doing the same for the related bug.

[1] https://review.openstack.org/#/c/437012/

Revision history for this message
Jeremy Stanley (fungi) wrote :

I agree public is fine as long as we also make bug 1561199 public, since some details from it are exposed by the discussion on this report.

Revision history for this message
Jeremy Stanley (fungi) wrote :

Switched to public security now that bug 1561199 has become public.

information type: Private Security → Public Security
description: updated
Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

Does Keystone (>= ocata) need a change after all?
It seems like all the other fixes have been merged on all supported releases. Please correct me if I'm wrong, but to fully patch this bug and bug 1561199 we need:

swift:
* Ib90adcc2f059adaf203fba1c95b2154561ea7487 (Stop using client headers for cross-middleware communication)
 - https://review.openstack.org/438722 : fixes last release of ocata,
                                         2.11.0, 2.12.0 and 2.13.0 are still affected
 - https://review.openstack.org/438720 : fixes pike

swift3
* Ia3fbb4938f0daa8845cba4137a01cc43bc1a713c (Stop using client headers for cross-middleware communication)
  - https://review.openstack.org/438719
* I21e38884a2aefbb94b76c76deccd815f01db7362 (Only make one request to Keystone per S3 request)
  - https://review.openstack.org/406424
* I1c0e68a5276dd3dee97d7569e477c784db8ccb8a (Add s3token middleware to the swift3 project)
  - https://review.openstack.org/310892

Revision history for this message
Tim Burke (1-tim-z) wrote :

Yeah, I think we can close this for Keystone -- the S3 extension is currently broken on ocata, and when it was fixed for pike (https://review.openstack.org/#/c/437012/) it stopped sending back the token ID. If we feel like fixing the endpoint for ocata (there's a backport at https://review.openstack.org/#/c/501148/) it should take the same approach.

I guess close as... Fix Released? Invalid?

Revision history for this message
Colleen Murphy (krinkle) wrote :

https://review.openstack.org/#/c/501148 has merged so I will close this as fix released.

Changed in keystone:
status: New → Fix Released
Revision history for this message
Luan Nguyen (nguyenhuuluan434-w) wrote :

Hi all,
I have a problem that seems to be the same case. When using s3cmd with an access_key and secret_key generated from 'openstack ec2 credentials create', I cannot list containers or perform any other action (e.g. s3cmd ls, s3cmd la, ...). I added some debug lines in s3_token_middleware.py; the result returned from keystone does not contain a token_id, so the swift proxy logs "proxy-server: Deferring reject downstream". As described above, when s3_token receives the info from keystone's s3 extension, it parses it, puts the token_id into the WSGI environment, and processing continues based on the storage URL.

The response from keystone is the input to:

def parse_v3_response(token):
    token = token['token']
    headers = {
        'X-Identity-Status': 'Confirmed',
        'X-Roles': ','.join(r['name']
                            for r in token['roles']),
        'X-User-Id': token['user']['id'],
        'X-User-Name': token['user']['name'],
        'X-User-Domain-Id': token['user']['domain']['id'],
        'X-User-Domain-Name': token['user']['domain']['name'],
        'X-Tenant-Id': token['project']['id'],
        'X-Tenant-Name': token['project']['name'],
        'X-Project-Id': token['project']['id'],
        'X-Project-Name': token['project']['name'],
        'X-Project-Domain-Id': token['project']['domain']['id'],
        'X-Project-Domain-Name': token['project']['domain']['name'],
    }
    return headers, None, token['project']

and the function parse_v3_response(token) is used in:

        headers, token_id, tenant = parse_v3_response(token)

Now the value of token_id is None, but token_id is assigned to req.headers['X-Auth-Token'].

https://github.com/openstack/swift3/blob/master/swift3/s3_token_middleware.py

Based on the code of s3_token_middleware.py, the middleware still needs a token_id for the rest of the processing.

Thank you for your attention.
Luan Nguyen.

Revision history for this message
Tim Burke (1-tim-z) wrote :

What's your swift proxy-server pipeline order? The combination of https://review.openstack.org/#/c/419721/ and https://review.openstack.org/#/c/406424/ caused it to need to change from something like

 ... swift3 s3token authtoken keystoneauth ... proxy-server

to

 ... authtoken swift3 s3token keystoneauth ... proxy-server
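
In proxy-server.conf terms, that's something along the lines of the following (heavily trimmed; a real pipeline will carry more middleware):

[pipeline:main]
pipeline = catch_errors proxy-logging cache authtoken swift3 s3token keystoneauth proxy-logging proxy-server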

Revision history for this message
Luan Nguyen (nguyenhuuluan434-w) wrote :

Thanks Tim, the change solves my problem.

Changed in keystone:
milestone: none → queens-3
Revision history for this message
Jeremy Stanley (fungi) wrote :

The vulnerable branches are no longer officially supported, so I'm going to set our security advisory task to Won't Fix indicating we won't be publishing one about this.

Changed in ossa:
status: Incomplete → Won't Fix