Configuring a reverse proxy
Distributed environments frequently require the use of a reverse proxy. NQRust-Identity offers several options to securely integrate with such environments.
Port to be proxied
NQRust-Identity runs on the following ports by default:
- 8443 (8080 when you enable HTTP explicitly with --http-enabled=true)
- 9000
The port 8443 (or 8080 if HTTP is enabled) is used for the Admin UI, Account Console, SAML and OIDC endpoints and the Admin REST API as described in the Configuring the hostname guide.
The port 9000 is used for management, which includes endpoints for health checks and metrics as described in the Configuring the Management Interface guide.
You only need to proxy port 8443 (or 8080), even when you use different host names for frontend/backend and administration as described in Configuring NQRust-Identity for production. You should not proxy port 9000, as health checks and metrics use that port directly, and you do not want to expose this information to external callers.
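For example, an operator can verify the management interface from inside the private network. This is a sketch assuming the default management port 9000 and that health checks are enabled via health-enabled=true:

```shell
# Query the readiness probe on the management port.
# Run this from inside the private network only; port 9000 must not be proxied.
curl http://localhost:9000/health/ready
```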
Configure the reverse proxy headers
NQRust-Identity will parse the reverse proxy headers based on the proxy-headers option which accepts several values:
- By default if the option is not specified, no reverse proxy headers are parsed. This should be used when no proxy is in use or with https passthrough.
- forwarded enables parsing of the Forwarded header as per RFC 7239.
- xforwarded enables parsing of the non-standard X-Forwarded-* headers, such as X-Forwarded-For, X-Forwarded-Proto, X-Forwarded-Host, X-Forwarded-Port, and X-Forwarded-Prefix.
If you are using a reverse proxy for anything other than https passthrough and do not set the proxy-headers option, then by default you will see 403 Forbidden responses to requests via the proxy that perform origin checking.
For example:
```
bin/kc.[sh|bat] start --proxy-headers forwarded
```
If either forwarded or xforwarded is selected, make sure your reverse proxy properly sets and overwrites the Forwarded or X-Forwarded-* headers respectively. To set these headers, consult the documentation for your reverse proxy. Do not use forwarded or xforwarded with https passthrough. Misconfiguration will leave NQRust-Identity exposed to security vulnerabilities.
Take extra precautions to ensure that the client address is properly set by your reverse proxy via the Forwarded or X-Forwarded-For headers.
If this header is incorrectly configured, rogue clients can set it themselves and trick NQRust-Identity into thinking the client is connecting from a different IP address than the actual one. This precaution is even more critical if you do any deny or allow listing of IP addresses.
When using the xforwarded setting, the X-Forwarded-Port takes precedence over any port included in the X-Forwarded-Host.
If the TLS connection is terminated at the reverse proxy (edge termination), enabling HTTP through the http-enabled setting is required.
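Combining these settings, a start command for edge TLS termination might look like the following sketch; the hostname is a placeholder, so adjust it to your deployment:

```shell
# TLS terminated at the proxy: accept X-Forwarded-* headers and enable HTTP
bin/kc.[sh|bat] start --proxy-headers xforwarded --http-enabled true \
  --hostname https://my.keycloak.org
```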
Different context path on reverse proxy
By default NQRust-Identity is exposed through the root context path (/). If the proxy is using a different context path than NQRust-Identity, one of the following must be done:
- Use a simple hostname for the hostname option, xforwarded for the proxy-headers option, and have the proxy set the X-Forwarded-Prefix header.
- Use a full URL for the hostname option including the proxy context path, for example --hostname=https://my.keycloak.org/auth if NQRust-Identity is exposed through the reverse proxy on /auth.
- Change the context path of NQRust-Identity itself to match the context path of the reverse proxy using the http-relative-path option.
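As an illustration of the last option, this sketch changes the server's own context path to match a proxy that exposes it under /auth (the path is a placeholder):

```shell
# Serve all resources under /auth so the proxy can forward the path unchanged
bin/kc.[sh|bat] start --http-relative-path /auth
```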
For more details on exposing NQRust-Identity on a different hostname or context path, including the Admin REST API and Console, see Configuring the hostname.
Enable sticky sessions
A typical cluster deployment consists of a load balancer (reverse proxy) and two or more NQRust-Identity servers on a private network. For performance reasons, it may be useful if the load balancer forwards all requests related to a particular browser session to the same NQRust-Identity backend node.
The reason is that NQRust-Identity uses an Infinispan distributed cache under the covers to save data related to the current authentication session and user session. The Infinispan distributed caches are configured with a limited number of owners. As a result, session-related data is stored only on some cluster nodes, and the other nodes need to look up the data remotely if they want to access it.
For example, if the authentication session with ID 123 is saved in the Infinispan cache on node1 and node2 then needs to look up this session, node2 must send a request to node1 over the network to retrieve the particular session entity.
It is beneficial if a particular session entity is always available locally, which can be achieved with the help of sticky sessions. The workflow in a cluster environment with a public frontend load balancer and two backend NQRust-Identity nodes can look like this:
- User sends initial request to see the NQRust-Identity login screen
- This request is served by the frontend load balancer, which forwards it to some node (for example, node1). Strictly speaking, the node does not need to be random; it can be chosen according to other criteria (client IP address and so on). It all depends on the implementation and configuration of the underlying load balancer (reverse proxy).
- NQRust-Identity creates an authentication session with a random ID (for example, 123) and saves it to the Infinispan cache.
- The Infinispan distributed cache assigns the primary owner of the session based on the hash of the session ID. See the Infinispan documentation for more details. Let's assume that Infinispan assigned node2 to be the owner of this session.
- NQRust-Identity creates the cookie AUTH_SESSION_ID with the format <session-id>.<owner-node-id>. In our example, it will be 123.node2.
- The response is returned to the user with the NQRust-Identity login screen and the AUTH_SESSION_ID cookie in the browser.
From this point, it is beneficial if the load balancer forwards all subsequent requests to node2, because that node is the owner of the authentication session with ID 123 and hence Infinispan can look up this session locally. After authentication is finished, the authentication session is converted to a user session, which will also be saved on node2 because it has the same ID 123.
Sticky sessions are not mandatory for a cluster setup; however, they are good for performance for the reasons mentioned above. You need to configure your load balancer to stick on the AUTH_SESSION_ID cookie. The appropriate procedure for making this change depends on your load balancer.
If your proxy supports session affinity without processing cookies from backend nodes, you should set the spi-sticky-session-encoder--infinispan--should-attach-route option
to false in order to avoid attaching the node name to cookies and instead rely on the reverse proxy's capabilities.
```
bin/kc.[sh|bat] start --spi-sticky-session-encoder--infinispan--should-attach-route=false
```
By default, the spi-sticky-session-encoder--infinispan--should-attach-route option value is true, so that the node name is attached to cookies to indicate to the reverse proxy the node to which subsequent requests should be sent.
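To observe the affinity cookie, you can request the login page of a realm and inspect the Set-Cookie response headers. In this sketch the hostname, realm name, and client are placeholders:

```shell
# Inspect response headers for AUTH_SESSION_ID; with should-attach-route=true
# the cookie value carries the owning node as a suffix, for example 123.node2
curl -sI "https://my.keycloak.org/realms/myrealm/protocol/openid-connect/auth?client_id=account&response_type=code&redirect_uri=https://my.keycloak.org/realms/myrealm/account" \
  | grep -i set-cookie
```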
Exposed path recommendations
When using a reverse proxy, NQRust-Identity only requires certain paths to be exposed. The following table shows the recommended paths to expose.
| NQRust-Identity Path | Reverse Proxy Path | Exposed | Reason |
|---|---|---|---|
| / | - | No | When exposing all paths, admin paths are exposed unnecessarily. |
| /admin/ | - | No | Exposed admin paths lead to an unnecessary attack vector. |
| /realms/ | /realms/ | Yes | This path is needed to work correctly, for example, for OIDC endpoints. |
| /resources/ | /resources/ | Yes | This path is needed to serve assets correctly. It may be served from a CDN instead of the NQRust-Identity path. |
| /.well-known/ | /.well-known/ | Yes | This path is needed to resolve Authorization Server Metadata and other information via RFC 8414. |
| /metrics | - | No | Exposed metrics lead to an unnecessary attack vector. |
| /health | - | No | Exposed health checks lead to an unnecessary attack vector. |
We assume you run NQRust-Identity on the root path / on your reverse proxy/gateway’s public API.
If not, prefix the path with your desired one.
If you configured a http-relative-path on the server, proceed as follows to use discovery with RFC 8414: Configure a reverse proxy to map the /.well-known/ path without the prefix to the path with the prefix on the server.
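For example, OIDC discovery metadata for a realm can be fetched through the proxy; the hostname and realm below are placeholders:

```shell
# Should return the Authorization Server Metadata JSON
# if the /realms/ and /.well-known/ paths are exposed correctly
curl https://my.keycloak.org/realms/myrealm/.well-known/openid-configuration
```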
Trusted Proxies
To ensure that proxy headers are used only from proxies you trust, set the proxy-trusted-addresses option to a comma separated list of IP addresses (IPv4 or IPv6) or Classless Inter-Domain Routing (CIDR) notations.
For example:
```
bin/kc.[sh|bat] start --proxy-headers forwarded --proxy-trusted-addresses=192.168.0.32,127.0.0.0/8
```
PROXY Protocol
The proxy-protocol-enabled option controls whether the server should use the HA PROXY protocol when serving requests from behind a proxy. When set to true, the remote address returned will be the one from the actual connecting client. The value cannot be true when using the proxy-headers option.
This is useful when running behind a compatible https passthrough proxy because the request headers cannot be manipulated.
For example:
```
bin/kc.[sh|bat] start --proxy-protocol-enabled true
```
Enabling client certificate lookup
When the proxy is configured as a TLS termination proxy, the client certificate information can be forwarded to the server through specific HTTP request headers and then used to authenticate clients. You can configure how the server retrieves client certificate information depending on the proxy you are using.
Client certificate lookup via a proxy header for X.509 authentication is considered security-sensitive. If misconfigured, a forged client certificate header can be used for authentication. Extra precautions need to be taken to ensure that the client certificate information can be trusted when passed via a proxy header.
- Double-check whether your use case needs reencrypt or edge TLS termination, which implies using a proxy header for client certificate lookup. TLS passthrough is recommended as the more secure option when X.509 authentication is desired, as it does not require passing the certificate via a proxy header. Client certificate lookup from a proxy header is applicable only to reencrypt and edge TLS termination.
- If passthrough is not an option, implement the following security measures:
  - Configure your network so that NQRust-Identity is isolated and can accept connections only from the proxy.
  - Make sure that the proxy overwrites the header that is configured in the spi-x509cert-lookup--<provider>--ssl-client-cert option.
  - Pay extra attention to the spi-x509cert-lookup--<provider>--trust-proxy-verification setting. Enable it only if you can trust your proxy to verify the client certificate. Setting spi-x509cert-lookup--<provider>--trust-proxy-verification=true without the proxy verifying the client certificate chain will expose NQRust-Identity to a security vulnerability where a forged client certificate can be used for authentication.
The server supports some of the most common TLS termination proxies, such as:
| Provider | Proxies |
|---|---|
| apache | Apache HTTP Server |
| haproxy | HAProxy |
| nginx | NGINX |
| traefik | Traefik (PassTLSClientCert middleware with pem: true) |
| rfc9440 | Proxies that are compliant with RFC 9440 |
| envoy | Envoy |
To configure how client certificates are retrieved from the requests you need to:
Enable the corresponding proxy provider
```
bin/kc.[sh|bat] build --spi-x509cert-lookup--provider=<provider>
```
Configure the HTTP headers
```
bin/kc.[sh|bat] start --spi-x509cert-lookup--<provider>--ssl-client-cert=SSL_CLIENT_CERT --spi-x509cert-lookup--<provider>--ssl-cert-chain-prefix=CERT_CHAIN --spi-x509cert-lookup--<provider>--certificate-chain-length=10
```
When configuring the HTTP headers, make sure the values you use correspond to the names of the headers forwarded by the proxy with the client certificate information.
Common options for configuring providers are:
| Option | Description | Supporting Providers |
|---|---|---|
| ssl-client-cert | The name of the header holding the client certificate | all but traefik and envoy, optional for rfc9440 |
| ssl-cert-chain-prefix | The prefix of the headers holding additional certificates in the chain, used to retrieve individual certificates according to the length of the chain. For instance, a value of CERT_CHAIN will tell the server to load additional certificates from headers CERT_CHAIN_0 to CERT_CHAIN_9 if certificate-chain-length is set to 10. | apache, haproxy, nginx |
| certificate-chain-length | The maximum length of the certificate chain beyond the client certificate | all but envoy (defaults to 1) |
Configuring the NGINX provider
The NGINX SSL/TLS module does not expose the client certificate chain. NQRust-Identity’s NGINX certificate lookup provider rebuilds it by using the NQRust-Identity truststore.
If you are using this provider, see Configuring trusted certificates for how
to configure an NQRust-Identity truststore. The options and defaults specific to nginx are as follows:
| Option | Description | Default |
|---|---|---|
| trust-proxy-verification | Enable trusting NGINX proxy certificate verification, instead of forwarding the certificate to NQRust-Identity and verifying it in NQRust-Identity. | false |
| cert-is-url-encoded | Whether the forwarded certificate is url-encoded or not. In NGINX, this corresponds to the $ssl_client_cert and $ssl_client_escaped_cert variables. | true |
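Putting this together, a sketch for enabling the nginx provider might look like the following. The truststore path is a placeholder, and the truststore-paths option is assumed to be available as described in Configuring trusted certificates:

```shell
# Select the nginx certificate lookup provider at build time
bin/kc.[sh|bat] build --spi-x509cert-lookup--provider=nginx

# Start with a truststore so the provider can rebuild the client certificate chain
bin/kc.[sh|bat] start --truststore-paths=/path/to/client-ca.pem
```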
Configuring the rfc9440 provider
If you stick to the header names mentioned in RFC 9440, you do not need to configure any additional options after selecting the rfc9440 provider.
The options and defaults specific to rfc9440 are as follows:
| Option | Description | Default |
|---|---|---|
| ssl-client-cert | The name of the header holding the client certificate | Client-Cert |
| ssl-cert-chain | The name of the header holding additional certificates in the chain. This is not a prefix but the full name of the header because RFC 9440 mandates that the chain certificates are contained in one header. | Client-Cert-Chain |
If your certificate chain is longer than the default certificate-chain-length, you must set that option to an appropriate number. Otherwise, the provider will discard the request.
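For example, to accept a chain of up to three certificates with the rfc9440 provider (the value is illustrative):

```shell
# Raise the maximum accepted certificate chain length for the rfc9440 provider
bin/kc.[sh|bat] start --spi-x509cert-lookup--rfc9440--certificate-chain-length=3
```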
Configuring the Traefik provider
The Traefik provider handles certificates forwarded by Traefik’s PassTLSClientCert middleware (opens in a new tab) with pem: true.
Traefik sends the client certificate and any intermediate CA certificates as PEM blocks in a single X-Forwarded-Tls-Client-Cert header, separated by commas.
The traefik provider parses all certificates from this header.
Other than possibly changing the certificate-chain-length, you do not need to configure additional options for the traefik provider.
Configuring the Envoy provider
The Envoy provider will automatically retrieve the client certificate and optional certificate chain from the x-forwarded-client-cert header.
You do not need to configure additional options for the envoy provider.
Graceful HTTP shutdown
When running NQRust-Identity behind a reverse proxy or load balancer, graceful shutdown ensures that in-flight requests complete successfully during server termination, preventing connection errors for clients.
NQRust-Identity enables graceful HTTP shutdown by default with configurable timeouts.
Understanding shutdown phases
The shutdown process consists of two phases:
Pre-shutdown delay
During this phase, NQRust-Identity signals to load balancers and proxies that it is preparing to shut down. The server’s readiness endpoint returns a "not ready" status, allowing the load balancer to stop routing new requests to this instance. Existing TLS and HTTP keepalive connections are allowed to drain naturally. The server continues to process existing requests.
Shutdown timeout
After the pre-shutdown delay, NQRust-Identity stops accepting new requests and waits for in-flight HTTP requests to complete. If requests are still running after the timeout expires, the server shuts down regardless.
Default behavior
By default, NQRust-Identity is configured with a 1-second pre-shutdown delay and a 1-second shutdown timeout. These defaults work well for most standard deployments where:
- The load balancer reconfigures quickly (within 1 second)
- Most requests complete within 1 second
- The reverse proxy uses edge termination or re-encryption (not TLS passthrough)
Configuring shutdown timeouts
Advanced users can adjust the shutdown timeouts using CLI options based on their deployment characteristics.
```
bin/kc.sh start --shutdown-delay=<duration> --shutdown-timeout=<duration>
```
Available options:
--shutdown-delay
Length of the pre-shutdown phase during which the server prepares for shutdown.
This period allows for load balancer reconfiguration and draining of TLS/HTTP keepalive connections.
Default: 1s
--shutdown-timeout
The shutdown period waiting for currently running HTTP requests to finish.
Default: 1s
Both values accept duration formats: 1s (seconds), 500ms (milliseconds), 2m (minutes), etc.
When to adjust shutdown timeouts
Consider adjusting these values based on your deployment configuration. The following table shows example scenarios:
| Scenario | Delay | Timeout | Reason |
|---|---|---|---|
| Load balancer polls readiness probe | 16s | | Assumptions: a 5s poll interval, a load balancer that reconfigures after two successive failed probes, 1s for reconfiguring the proxy, and a proxy using TLS re-encrypt or edge termination. Calculation: allow three poll cycles for the load balancer to detect the shutdown plus the extra time to perform the reconfiguration: shutdown delay = 3 * 5s + 1s. |
| TLS passthrough configuration | 10-30s | | A longer delay allows keepalive connections to drain naturally and receive connection close signals. |
| Long-running admin API requests | | 10-30s | Admin operations may take longer than typical user requests. |
| Test environments / quick restarts | 0s | 500ms | Minimize shutdown time when graceful draining is not needed. |
| Deployment orchestration reconfigures the proxy and drains connections before Pod termination | 0s | | No pre-shutdown delay is needed if the proxy is already reconfigured. |
| Combined: TLS passthrough plus polled readiness | 26-56s | | Delays add up: time for load balancer detection plus connection draining. |
Example configurations
For production with TLS passthrough:
```
bin/kc.[sh|bat] start --shutdown-delay=30s --shutdown-timeout=1s
```
For load balancers that poll readiness:
```
bin/kc.[sh|bat] start --shutdown-delay=16s --shutdown-timeout=1s
```
For test environments:
```
bin/kc.[sh|bat] start --shutdown-delay=0s --shutdown-timeout=500ms
```
The shutdown delay affects the minimum time required for a complete server restart.
In Kubernetes environments, ensure your terminationGracePeriodSeconds is longer than the sum of shutdown-delay and shutdown-timeout to prevent forced termination.
Relevant options
| Option | Type or Values | Default |
|---|---|---|
| hostname: Address at which the server is exposed. Can be a full URL, or just a hostname. When only a hostname is provided, scheme, port and context path are resolved from the request. CLI: --hostname, Env: KC_HOSTNAME | String | |
| hostname-admin: Address for accessing the administration console. Use this option if you are exposing the administration console using a reverse proxy on a different address than specified in the hostname option. CLI: --hostname-admin, Env: KC_HOSTNAME_ADMIN | String | |
| http-relative-path: Set the path relative to / for serving resources. The path must start with a /. CLI: --http-relative-path, Env: KC_HTTP_RELATIVE_PATH | String | / |
| shutdown-delay: Length of the pre-shutdown phase during which the server prepares for shutdown. May be an ISO 8601 duration value, an integer number of seconds, or an integer followed by one of [ms, h, m, s, d]. This period allows for load balancer reconfiguration and draining of TLS/HTTP keepalive connections. CLI: --shutdown-delay, Env: KC_SHUTDOWN_DELAY | String | 1s |
| shutdown-timeout: The shutdown period waiting for currently running HTTP requests to finish. May be an ISO 8601 duration value, an integer number of seconds, or an integer followed by one of [ms, h, m, s, d]. CLI: --shutdown-timeout, Env: KC_SHUTDOWN_TIMEOUT | String | 1s |
| proxy-headers: The proxy headers that should be accepted by the server. Misconfiguration might leave the server exposed to security vulnerabilities. Takes precedence over the deprecated proxy option. CLI: --proxy-headers, Env: KC_PROXY_HEADERS | forwarded, xforwarded | |
| proxy-protocol-enabled: Whether the server should use the HA PROXY protocol when serving requests from behind a proxy. When set to true, the remote address returned will be the one from the actual connecting client. Cannot be enabled when proxy-headers is used. CLI: --proxy-protocol-enabled, Env: KC_PROXY_PROTOCOL_ENABLED | true, false | false |
| proxy-trusted-addresses: A comma separated list of trusted proxy addresses. If set, proxy headers from other addresses will be ignored. By default all addresses are trusted. A trusted proxy address is specified as an IP address (IPv4 or IPv6) or Classless Inter-Domain Routing (CIDR) notation. Available only when proxy-headers is set. CLI: --proxy-trusted-addresses, Env: KC_PROXY_TRUSTED_ADDRESSES | List | |