Description
Perhaps this is just a request for guidance (if I am wrong), but I can't work out how to gracefully get a NiFi v2 container online behind an OpenShift Route because of the "Invalid SNI" issue.
This issue (697) gave some hints using an Ingress, but I haven't found an equivalent approach using Routes.
From tracing through the current solution, what I think is happening is:
- The operator creates a `StatefulSet` which forces `NODE_ADDRESS` to exist during startup, but builds the value itself based on an internal address.
- This overwrites attempts to set `NODE_ADDRESS` using a `ConfigMap`.
- The `nifi.properties` loads `NODE_ADDRESS` into both `nifi.cluster.node.address` and `nifi.web.https.host`, with the whole file being drawn from a `ConfigMap`.
- Attempts to edit `nifi.properties` in the `ConfigMap` are overwritten by the operator.
- Attempts to edit the `StatefulSet` to adjust the CLI setting `NODE_ADDRESS` are overwritten by the operator.
- TBF I expected those and am just noting them. In contrast, attempts to add new environment variables to the `ConfigMap` are not overwritten.
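To make the coupling concrete, here is a rough sketch of the relevant portion of the operator-rendered `ConfigMap` as I understand it. The resource name and the exact substitution mechanism are my assumptions, not verbatim from the operator, so treat this as illustrative only:

```yaml
# Sketch of the operator-managed ConfigMap (illustrative, not verbatim).
apiVersion: v1
kind: ConfigMap
metadata:
  name: nifi-config   # hypothetical name
data:
  nifi.properties: |
    # Both properties are populated from the same NODE_ADDRESS value,
    # which the operator derives from the pod's internal address.
    nifi.cluster.node.address=${NODE_ADDRESS}
    nifi.web.https.host=${NODE_ADDRESS}
```

The point is that there is a single knob (`NODE_ADDRESS`) feeding both the cluster-internal address and the address Jetty validates incoming requests against.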
Would it be as simple as allowing `nifi.web.https.host` to be overwritten by a new variable, like `PUBLIC_ADDRESS`, that we can set in the `ConfigMap`? I gather this is what is causing Jetty to reject the traffic originating from the public route.
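In other words, something along these lines. This is entirely hypothetical: `PUBLIC_ADDRESS` does not exist today, and the fallback syntax assumes shell-style substitution; it is just the shape of what I'm asking for:

```yaml
# Hypothetical: a user-supplied override the operator would leave alone.
data:
  nifi.properties: |
    # Cluster traffic keeps using the operator-derived internal address.
    nifi.cluster.node.address=${NODE_ADDRESS}
    # Public-facing host check uses PUBLIC_ADDRESS when set,
    # falling back to NODE_ADDRESS otherwise (shell-style default).
    nifi.web.https.host=${PUBLIC_ADDRESS:-${NODE_ADDRESS}}
```

Since new environment variables added to the `ConfigMap` already survive reconciliation, this seems like a small change on the operator side.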
For NiFi v1.27.0, using a passthrough route 'just worked', but both an upgrade to v2 and a fresh install of v2 fail. It's unclear to me why the internal pod wants to perform this validation... and TBH I would love to disable it, but maybe it adds value. My last gasp is to track down an admin who knows cert-fu and try switching to a re-encrypt route, but I'm not looking forward to that.
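For reference, the passthrough Route that worked against v1.27.0 looked roughly like this. Host, service name, and port name are placeholders from my setup, not anything the operator generates:

```yaml
# Passthrough Route sketch: TLS is not terminated at the router,
# so the pod receives the client's original TLS handshake.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: nifi                      # placeholder name
spec:
  host: nifi.apps.example.com     # placeholder public hostname
  to:
    kind: Service
    name: nifi-headless           # placeholder service name
  port:
    targetPort: https             # placeholder port name on the Service
  tls:
    termination: passthrough      # the pod sees the public SNI directly
```

With passthrough, the SNI the pod sees is the public hostname, which is presumably exactly what the v2 host validation now rejects, since `nifi.web.https.host` is pinned to the internal address.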