OSW 2024

Agenda Wednesday

Talks & Tutorials Wednesday

Workload Identity - Scaling Security Up and Down

Justin Richer (Bespoke Engineering)

Workloads present a unique environment for security and identity problems. We'll talk about the new work spinning up in the IETF's proposed WIMSE working group, and how it relates to OAuth, OpenID, and related security technologies.


OpenID for Verifiable Credentials Deep Dive

Kristina Yasuda, Torsten Lodderstedt (SPRIND)

OpenID for Verifiable Credentials (OID4VC) is a protocol family developed in the Digital Credentials Protocols (DCP) Working Group of the OpenID Foundation that enables the development of Wallet-based applications. It has seen tremendous adoption over the last two years across different contexts and jurisdictions, which has resulted in a lot of implementer feedback and increased participation in the WG. One of those initiatives, eIDAS 2.0, introduced very high security and privacy requirements. This session will give an update on the latest developments in OID4VC and an outlook on the topics that are planned to be addressed. It will also cover additional aspects important to strengthening the OID4VC ecosystem, such as conformance testing, security analysis results, and an overview of the available open-source implementations.


Scope Customization at the Resource Server for Fine-grained Access Control

Michiel de Jong, Pieter van der Meulen (SURF)

In research and education (R&E), we see new standards evolving (e.g. in AARC, https://aarc-project.eu) that address use cases which are particularly relevant in that environment and are not addressed by existing OAuth standards. This is linked to a move away from authentication mechanisms like passwords, SSH keys and X.509 certificates towards token-based authentication. In R&E, the organisations in control of the various resource servers and clients work together in communities. These communities run an authorization server and manage trust and authorization policies. In this context, the authorization server does not have detailed knowledge of the resources and access modes that a resource server can offer, and is thus not well placed to present a scope selection GUI that goes beyond a generic description of the scope.

This is a problem for use cases where more fine-grained control over the level of access is required, e.g. granting access to a specific folder on a storage server, or to a specific computing resource; such cases cannot be solved by establishing a list of generic scopes. Another use case we want to address is where several resource servers offer a similar service using the same protocol, e.g. storage servers that offer WebDAV access to files. In this case, the user should be able to choose the resource server they want to access, and the client should not have to know about the resource servers in advance.

We aim for a solution that does not require changes to existing resource servers and that will work with existing OAuth clients. We expect that our solution will be of interest to other communities with similar requirements, where the authorization server and resource server are developed by different entities.

We considered using the Lodging Intent Pattern, a "pre-dance" in which the client first obtains a structured scope before initiating the main OAuth dance, but we rejected this approach because it creates an undesirable many-to-many relationship between clients and specific resource servers. Instead, we want to hide the resource server behind the authorization server, which acts as a trusted broker between the various clients of various organisations and the various resource servers of various (other) organisations.

We therefore want to propose an OAuth extension that adds a "scope picker" service close to each resource server, to which the authorization server redirects the user in a "sub-dance", leaving the GUI of the authorization server generic and easy to maintain. This works as follows: using the standard authorization code flow, the client redirects the user to the authorization server. The client can request a specific resource server by including an audience, or it can use a scope to request a specific type of service (e.g. WebDAV). The authorization server then redirects the user to a well-known endpoint on the resource server's scope picker service, based on the client's and user's preferences, and applies any restrictions imposed by the community's policy.

The scope picker shows a GUI in which the user can select, e.g., a folder from the ones they have access to and set the level of access (read, write), and builds a structured scope from this information. This structured scope is given a human-readable name, either suggested by the scope picker or chosen by the user. The scope picker then redirects the user back to the authorization server with all the scope information, so the authorization server can display the human-readable description in its GUI even though it doesn't understand what the scope stands for. Optionally, protocol-specific information (e.g. the WebDAV URL to use) is returned to the authorization server; this is required because the client may not know about the resource server the user selected. When the user then grants access on the authorization server, the OAuth authorization code flow continues back to the client as normal. The client can then use its access token to request the protocol-specific information from a well-known endpoint on the authorization server and use it to access the resource server.
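To make the sub-dance concrete, here is a hypothetical illustration of what a scope picker might hand back to the authorization server. The field names, URL and label are invented for illustration; the proposed extension does not define a concrete format here.

```python
# Hypothetical structured scope built by the scope picker after the user's
# selections. All field names and values are illustrative assumptions.
structured_scope = {
    "resource_server": "https://storage.example.org",  # chosen by the user
    "protocol": "webdav",                              # service type requested
    "path": "/projects/climate-data/",                 # folder picked in the GUI
    "access": ["read", "write"],                       # selected access level
}

# Human-readable name shown in the authorization server's consent GUI,
# even though the AS does not interpret the structured scope itself.
display_name = "Read/write access to 'climate-data' on storage.example.org"

# Optional protocol-specific information for the client, which the client
# later fetches from a well-known endpoint on the authorization server.
protocol_info = {
    "webdav_url": "https://storage.example.org/dav/projects/climate-data/",
}
```

The authorization server only needs to relay `display_name` to its consent GUI and pass the opaque `structured_scope` and `protocol_info` through; it never has to understand their contents.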


PKI-based AuthN/AuthZ protocol for secret sharing applications

Aivo Kalu (Cybernetica AS)

The Estonian government and citizens have been using the signature creation application DigiDoc4 (https://en.wikipedia.org/wiki/DigiDoc, https://github.com/open-eid/DigiDoc4-Client), which also has a file encryption/decryption feature, for over 10 years. Current DigiDoc4 versions simply use the national ID-card for direct encryption/decryption of the CEK (Content Encryption Key) of the encrypted file container. The deployed direct encryption/decryption scheme is comparable to the JWE key management schemes "RSA1_5", "RSA-OAEP" or "ECDH-ES"; however, it relies on specific encryption/decryption primitives provided by smart-cards. The new version of DigiDoc4 needs to allow the use of any PKI-based authenticator, including those that don't provide encryption/decryption cryptographic primitives or key establishment protocols and only provide a "sign(hash)"-style cryptographic primitive for authentication purposes. Such authenticators include some eIDAS1 eID means and also FIDO tokens with only the CTAP.authenticatorGetAssertion() method.


This talk proposes a way to overcome the limitations of such authenticators. First, when preparing the encrypted file container, the CEK decryption key is split into shares using Shamir's secret sharing scheme and uploaded to multiple key capsule transmission servers. The recipient of the encrypted container has to authenticate to those servers and download enough of the shares to reconstruct the whole CEK decryption key. This way, encrypted files are transmitted from Sender to Recipient over a regular transmission channel (for example e-mail), and the key material required to decrypt the files is transmitted via an independent secondary channel (the key capsule transmission servers). Using strong authentication in the secondary channel, we can achieve a reasonable level of security for the encryption/decryption use case.
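The splitting step can be sketched with a minimal Shamir secret sharing implementation over a prime field. The prime and threshold below are illustrative choices, not parameters from the talk.

```python
# Minimal sketch of Shamir's secret sharing, as used to split the CEK
# decryption key (treated as an integer) into shares for the key capsule
# transmission servers. Illustrative only; not the talk's implementation.
import secrets

PRIME = 2**521 - 1  # Mersenne prime, large enough for a 256-bit key


def _eval_poly(coeffs, x):
    """Evaluate the polynomial (mod PRIME) at x using Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % PRIME
    return acc


def split_secret(secret, n_shares, threshold):
    """Split an integer secret into n_shares; any `threshold` of them
    suffice to reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    return [(x, _eval_poly(coeffs, x)) for x in range(1, n_shares + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

With, say, 3 servers and a threshold of 2, the Recipient can recover the CEK decryption key from any two shares, so no single key capsule server ever holds enough material to decrypt the container.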


However, the authentication protocol has some special requirements:


1) The Recipient needs to authenticate to multiple key capsule transmission servers.

2) The Recipient should use the PKI-based authenticator only once, i.e. we can create only a single signature during the authentication flow.

3) We cannot introduce an additional central trusted component to act as an (OAuth2) authorization server. Such a component would have access to all transmitted shares, and thus essentially to every CEK of every container.

4) Servers do not trust each other and shouldn't be able to replay access tokens or authentication tokens to each other.


It turns out that existing standards-based authorization protocols (such as OAuth2 + DPoP, OAuth2 with JWT access tokens, ...) do not fulfil all of these requirements and are not directly applicable to such an architecture. Thus we designed a custom authentication protocol. In short, the steps of the protocol are as follows:


1. The Recipient requests each server to issue a nonce.

2. The Recipient constructs authentication data containing all the nonces in hashed form. The nonces are hashed in order to hide the clear-text values. This idea is comparable to the OAuth2 PKCE method, which uses a code verifier and code challenge in a similar way.

3. The Recipient creates a signature over the authentication data with the authenticator. This is comparable to a self-signed JWT access token.

4. The Recipient constructs a different authentication token for each server. The token contains the signature, the clear-text nonce from the server which generated it, and the rest of the nonces in hashed form. The Recipient authenticates to each server with such a unique token.


On receiving the authentication token, a server can verify that the Recipient presents nonce N_i, generated by the server itself. Next, the server can reconstruct the authentication data and verify the signature. However, the server cannot replay this authentication token to other servers, because it does not have the clear-text values of the nonces generated by the other servers.


The talk outlines the application architecture and protocol requirements, explains the details of the custom protocol, and compares the approach with existing protocols from the OAuth2 family. We hope that the combination of a secret sharing scheme and an authentication protocol will be interesting to the audience. If there are interested parties, a standardized approach could be developed in the future as an extension of the JWE key management modes.


Enhancing User Experience in OAuth-Based Native Apps Without Compromising Security

Janak Amarasena (WSO2)

OAuth 2.0 paired with OpenID Connect is the de facto standard for application authentication. While the Authorization Code flow is widely recommended and effective across various scenarios, its use of browser redirection often presents a less-than-ideal user experience in native applications. As a result, developers tend to prioritize user experience over security best practices, for example by using embedded web views for authentication or the password grant. The implementation can become even more complex, and possibly less secure, when incorporating multi-factor authentication and social login integration, which are common requirements these days.


In this session we will discuss a solution that developers can use to incorporate authentication and authorization into their apps in an API-centric manner, achieving the needed user experience while maintaining the desired level of security. In brief, the developer starts an authorization flow and then continues it with a newly introduced endpoint that is capable of handling complex authentication requirements. We will walk through the overall design of the flow; the security considerations, including identified threats and mitigations; the design decisions taken to optimize the experience; where this flow fits best while maintaining the desired level of security; and a demonstration of an implementation of this flow.
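As a rough sketch of how such an API-centric flow could look from the client side, the helpers below build the two requests: the initial authorization request asking for an API response instead of a login page, and the follow-up call to the new authentication endpoint. All endpoint semantics, parameter names and payload fields here are assumptions for illustration, not the actual API presented in the talk.

```python
# Hedged sketch of an API-centric authorization flow for a native app.
# Field and parameter names are hypothetical, not a published API.

def start_authorization_payload(client_id, redirect_uri):
    """Form parameters for the initial authorization request.

    The assumed "app_native" response mode tells the server to return a
    flow identifier and the next authentication step as JSON, instead of
    redirecting the user to a browser-based login page."""
    return {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "response_mode": "app_native",  # assumed switch to the API-centric flow
    }


def continue_authn_payload(flow_id, authenticator, credentials):
    """JSON body for the follow-up call to the new authentication endpoint.

    The app repeats this call with different authenticators ("password",
    "totp", a social login handle, ...) until the server returns an
    authorization code, covering MFA without any embedded web view."""
    return {
        "flowId": flow_id,
        "authenticator": authenticator,
        "credentials": credentials,
    }
```

The app would POST the first payload, receive a flow identifier plus the next required step, and then loop on the second call once per factor; from there the standard token exchange proceeds as usual.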