Proof of Concept Status
The current POC effort is focused on proving the above flow while minimizing the work required on the UI side.
The following notes reflect the POC state for the above flow:
- On the first request to a given UI with the redirecting authentication handler, the hadoop auth filter sees that there is no hadoop auth cookie and delegates to the configured handler. The redirecting authentication handler looks for a simple cookie that represents a knoxsso token (this may be changed to a JWT bearer token or cookie). In the absence of this cookie, the handler redirects the browser to the configured knoxsso endpoint and passes the original UI URL as the request parameter "originalUrl" (a sketch of this handler follows the list). Example: http://localhost:8888/knoxsso?originalUrl=http://localhost:8888/app/
- The knoxsso endpoint has a number of filters. The first captures the originalUrl parameter and creates a cookie called original-url, used later to redirect the user back to the UI once authentication has succeeded (sketches of these filters also follow the list). Example: original-url=http://localhost:8888/app/
- The next filter is the JBoss PicketLink SPFilter for SAML service providers. It redirects the user to the IdP (Shibboleth, running in Jetty on a CentOS VM) to challenge for credentials; in the POC the user is authenticated against the Knox demo ApacheDS LDAP server, but this could be any LDAP server or AD. Once the user is successfully authenticated, the IdP redirects the user back to the knoxsso endpoint. The capture filter ignores the incoming POST, since it does not carry the originalUrl parameter, and lets processing continue to the PicketLink filter, where the assertion is accepted and the userid is extracted and made available to the servlet programming model through HttpServletRequest.getUserPrincipal.
- The next filter redirects back to the UI with a token that can be consumed by the UI authentication handler. This redirecting filter extracts the userid from getUserPrincipal and creates a cookie that simply carries the username as its value. Example: hadoop-auth=guest. It then reads the original-url cookie that was set by the capture filter and redirects the user, token cookie in hand, to the original URL. Example: http://localhost:8888/app/
- The hadoop auth filter on the UI endpoint accepts the request but still finds no hadoop auth cookie, so it delegates once again to the redirecting authentication handler. This time the handler finds the expected cookie/token, extracts the userid, creates a hadoop authentication token, and returns it to the filter. The filter creates a hadoop auth cookie for this token and uses it for authentication until it expires and is no longer presented by the browser, at which point the flow starts again from the first step.
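
A minimal sketch of what the redirecting authentication handler on the UI side could look like against Hadoop's AuthenticationHandler SPI, covering both the first-pass redirect and the later cookie consumption. The class name, the knoxsso.endpoint property, and the cookie handling are illustrative assumptions based on the examples above, not the actual POC code.

```java
import java.io.IOException;
import java.net.URLEncoder;
import java.util.Properties;

import javax.servlet.ServletException;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.hadoop.security.authentication.client.AuthenticationException;
import org.apache.hadoop.security.authentication.server.AuthenticationHandler;
import org.apache.hadoop.security.authentication.server.AuthenticationToken;

/**
 * Illustrative handler for the POC flow: if the simple knoxsso token
 * cookie is absent, redirect the browser to knoxsso with the original
 * URL as a parameter; otherwise turn the cookie value into a Hadoop
 * AuthenticationToken.
 */
public class RedirectingAuthenticationHandler implements AuthenticationHandler {

  public static final String TYPE = "knoxsso-redirect";
  // Cookie name follows the POC example above (hadoop-auth=guest).
  private static final String TOKEN_COOKIE = "hadoop-auth";
  private String ssoEndpoint;

  @Override
  public String getType() {
    return TYPE;
  }

  @Override
  public void init(Properties config) throws ServletException {
    // Hypothetical property; e.g. http://localhost:8888/knoxsso
    ssoEndpoint = config.getProperty("knoxsso.endpoint");
  }

  @Override
  public void destroy() {
  }

  @Override
  public boolean managementOperation(AuthenticationToken token,
      HttpServletRequest request, HttpServletResponse response) {
    return true; // no management operations in the POC
  }

  @Override
  public AuthenticationToken authenticate(HttpServletRequest request,
      HttpServletResponse response) throws IOException, AuthenticationException {
    Cookie[] cookies = request.getCookies();
    if (cookies != null) {
      for (Cookie cookie : cookies) {
        if (TOKEN_COOKIE.equals(cookie.getName())) {
          // The POC token is just the username; a signed JWT would be
          // verified here instead.
          String user = cookie.getValue();
          return new AuthenticationToken(user, user, TYPE);
        }
      }
    }
    // No token yet: bounce the browser to knoxsso, passing along where
    // the user was headed.
    String originalUrl = URLEncoder.encode(
        request.getRequestURL().toString(), "UTF-8");
    response.sendRedirect(ssoEndpoint + "?originalUrl=" + originalUrl);
    return null; // null signals that authentication is still in progress
  }
}
```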
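Similarly, the knoxsso-side capture and redirect filters that bracket the PicketLink filter might look roughly like this. Both classes are illustrative sketches of the flow described above, shown in one listing for brevity (they would be separate public classes in practice).

```java
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * First filter on the knoxsso endpoint: remember where the browser
 * came from. The IdP's incoming POST lacks the originalUrl parameter
 * and so passes straight through, as described above.
 */
class OriginalUrlCaptureFilter implements Filter {
  @Override public void init(FilterConfig config) {}
  @Override public void destroy() {}

  @Override
  public void doFilter(ServletRequest request, ServletResponse response,
      FilterChain chain) throws IOException, ServletException {
    String originalUrl = request.getParameter("originalUrl");
    if (originalUrl != null) {
      ((HttpServletResponse) response).addCookie(
          new Cookie("original-url", originalUrl));
    }
    chain.doFilter(request, response);
  }
}

/**
 * Last filter: once the SAML assertion has been accepted, mint the
 * simple POC token cookie from getUserPrincipal() and send the browser
 * back to the captured original URL.
 */
class TokenRedirectFilter implements Filter {
  @Override public void init(FilterConfig config) {}
  @Override public void destroy() {}

  @Override
  public void doFilter(ServletRequest request, ServletResponse response,
      FilterChain chain) throws IOException, ServletException {
    HttpServletRequest req = (HttpServletRequest) request;
    HttpServletResponse res = (HttpServletResponse) response;
    if (req.getUserPrincipal() != null && req.getCookies() != null) {
      // POC token: bare username as the cookie value (hadoop-auth=guest).
      res.addCookie(new Cookie("hadoop-auth",
          req.getUserPrincipal().getName()));
      for (Cookie cookie : req.getCookies()) {
        if ("original-url".equals(cookie.getName())) {
          res.sendRedirect(cookie.getValue());
          return;
        }
      }
    }
    chain.doFilter(request, response);
  }
}
```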
Additional Notes
- Cookie domains may not need to be the same across all UIs using this approach
- In order to use a more sophisticated/secure token between knoxsso and the UI, we will need to verify its signature using a common key. This will likely require the use of the KeyProvider API or CredentialProvider API (a key lookup sketch follows these notes). This will also require either:
- a central KMS provider that will allow constrained access to the same key material by knoxsso and the UI auth handler
- separate keystores, with the key provisioned independently in each and kept in sync
- Normalizing on JWT as the token consumed by the UI auth handler will require JWT parsing and verification code to be available in hadoop (a verification sketch also follows these notes). It is not yet clear whether this can live in the hadoop-auth module or whether it needs to go into common/security.
- This same architecture can be used with other implementations on the knoxsso side in place of the SAML/Shibboleth integration, and we will have to make this configurable. The first filter will always capture the original URL and the last will always redirect back to it; the processing in between can be pluggable to accommodate various integrations with SSO providers, simple hosted mechanisms (FORM, HTTP Basic), etc. A sketch of such an extension point follows as well.
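
As one possible approach for the shared key, Hadoop's Configuration.getPassword() already resolves secrets through any configured CredentialProviders, which would let knoxsso and the UI auth handler point at the same provider path. The alias knoxsso.signing.key is made up for illustration.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;

/**
 * Sketch: both knoxsso and the UI auth handler resolve the shared
 * signing key through the CredentialProvider API instead of
 * hard-coding it. The alias below is hypothetical.
 */
public class SharedKeyLookup {
  public static char[] signingKey(Configuration conf) throws IOException {
    // getPassword() consults the providers configured via
    // hadoop.security.credential.provider.path before falling back
    // to the plain configuration value.
    return conf.getPassword("knoxsso.signing.key");
  }
}
```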
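For the parsing/verification itself, a library such as nimbus-jose-jwt is one candidate; the sketch below assumes an HMAC-signed token and the shared key from the previous note, neither of which the POC has settled on.

```java
import java.text.ParseException;

import com.nimbusds.jose.JOSEException;
import com.nimbusds.jose.crypto.MACVerifier;
import com.nimbusds.jwt.SignedJWT;

/**
 * Sketch of the verification the UI auth handler would need if the
 * token becomes a JWT: check the signature with the shared key and
 * return the userid carried in the subject claim.
 */
public class JwtTokenVerifier {
  public static String verifiedSubject(String serializedJwt, byte[] sharedKey)
      throws ParseException, JOSEException {
    SignedJWT jwt = SignedJWT.parse(serializedJwt);
    if (!jwt.verify(new MACVerifier(sharedKey))) {
      return null; // signature did not verify; reject the token
    }
    return jwt.getJWTClaimsSet().getSubject();
  }
}
```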
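One way to express that pluggability is a small provider interface that the fixed capture and redirect filters call into, selected via configuration; the interface below is purely hypothetical.

```java
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * Hypothetical extension point: the capture and redirect filters stay
 * fixed while whatever establishes the user identity in between
 * (PicketLink/SAML, FORM, HTTP Basic, another SSO product) is swapped
 * behind this interface.
 */
public interface SsoAuthenticationProvider {
  /**
   * Challenge for and/or validate credentials. Returns the
   * authenticated userid, or null if a challenge was written to the
   * response and the exchange is still in progress.
   */
  String authenticate(HttpServletRequest request, HttpServletResponse response);
}
```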