Fix issue 2017: setting localhost as default in docker-compose based setups #2601
base: main
Conversation
…on to expose externally. [static] Signed-off-by: Pasindu Tennage <[email protected]>
…to choose between localhost and 0.0.0.0 [static] Signed-off-by: Pasindu Tennage <[email protected]>
…r deployment. Hardcoded to localhost, with no option to run externally. Since localnet only runs locally, no need to expose externally. [ci] Signed-off-by: Pasindu Tennage <[email protected]>
- "${SV_UI_PORT}:${SV_UI_PORT}" | ||
- "${APP_PROVIDER_UI_PORT}:${APP_PROVIDER_UI_PORT}" | ||
- "${APP_USER_UI_PORT}:${APP_USER_UI_PORT}" | ||
- "127.0.0.1:${SV_UI_PORT}:${SV_UI_PORT}" |
Ah, your idea being that these should never be public? Hm, not 100% sure about that change - perhaps someone does want to make them public after all, as part of their testing?
Probably nicest to make them optionally public like for the other compose file?
cc @isegall-da WDYT?
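One way to make them optionally public, as suggested above, is compose's `${VAR:-default}` interpolation; a hedged sketch with an illustrative `HOST_BIND_IP` variable (not necessarily what the PR ends up using):

```bash
# In the compose file, the host side of each published port could read:
#
#   ports:
#     - "${HOST_BIND_IP:-127.0.0.1}:${SV_UI_PORT}:${SV_UI_PORT}"
#
# Loopback-only by default; opting in to public exposure is then a one-liner:
HOST_BIND_IP=0.0.0.0 docker compose up -d
```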
I have some mixed feelings about this, and the following is my reasoning.

- In any realistic setting, `localnet` will never host a real deployment, and hence nothing bad can happen if no port is opened to the outside (the open ports accept traffic only from 127.0.0.1).
- Now `validator` is a bit tricky: the validator initiates the connection with the `SV` (and not the other way around), and therefore it is OK if the validator only accepts new connection requests (or just requests) from `localhost`. But I am curious about the case where some other component (a frontend) connects to the validator, in which case the validator does have to listen for new connections from outside its local machine. If the latter is a realistic scenario, then the user has to put `-E` in the `start.sh`.
- The `SV` case: the `SV` often gets external requests, so I believe its default behavior should be to listen on 0.0.0.0 (all interfaces) instead of `localhost`. But a counterargument to that is that the `SV` in production uses Helm charts, and in any realistic setting this docker-compose will be used for local testing. Hence, for all practical purposes, I think it is OK to let the `SV` accept connections from `localhost` only.
@moritzkiefer-da Please let me know what you think.
Thanks
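To make the frontend scenario above concrete: with a loopback-only binding, a connection check succeeds on the validator host itself but is refused from any other machine. A minimal sketch, with a placeholder port and hostname:

```bash
# On the validator host itself: the loopback-bound port answers.
nc -z 127.0.0.1 5003 && echo "reachable locally"

# From another machine (placeholder hostname): refused unless start.sh
# was run with -E to bind on all interfaces.
nc -z validator-host.example 5003 || echo "not reachable externally"
```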
I would actually drop SV and localnet from this PR altogether, and focus only on validators. Everything else is dev tooling, so I'd keep those simple (and vulnerable).
For validators, I think that your all-or-nothing approach of the -E flag makes sense.
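A minimal sketch of what the all-or-nothing `-E` flag could look like in `start.sh`; the actual option handling in the repo may differ:

```bash
#!/usr/bin/env bash
# Default to loopback-only bindings; -E opts in to all interfaces.
HOST_BIND_IP=127.0.0.1
while getopts "E" opt; do
  case "$opt" in
    E) HOST_BIND_IP=0.0.0.0 ;;
  esac
done
export HOST_BIND_IP   # consumed by the compose file's port mappings
docker compose up -d
```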
I'd look at it this way:
The whole reason we're adding this is that it got flagged as a security issue. Imho the best course of action on that is to change the default for everything, including localnet (so yes, I strongly disagree with @isegall-da here). Yes, it is only localnet, but that doesn't really matter. If someone gets an RCE on your machine because you exposed localnet publicly, that's bad.
Which then leaves the question of where we allow changing it. There I'd also keep it simple: Allow overwriting it everywhere. If nothing else this is useful for backwards compatibility and there is no real downside. Trying to justify why you really don't need it for e.g. localnet only to then have a user complain that we broke their setup with no option to fix it seems more trouble than just adding the flag.
Looks reasonable to me, nice work @pasindutennage-da !
[ci] Signed-off-by: Pasindu Tennage <[email protected]>
The CI error comes from the test not exporting the `HOST_BIND_IP` variable.
The message was probably from stop.sh. You might want instead to just add HOST_BIND_IP here: `splice/cluster/compose/validator/stop.sh`, line 33 (at 38602fd).
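Since `stop.sh` invokes `docker compose` against the same file, the variable has to be defined there as well, or compose warns that it is unset. A hedged sketch of the addition:

```bash
# Provide a harmless default so "docker compose down" interpolates the
# compose file cleanly; the value is irrelevant for teardown.
export HOST_BIND_IP="${HOST_BIND_IP:-127.0.0.1}"
docker compose down
```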
Thanks! Changes lgtm overall; this should definitely go in the release notes, including mentioning the option to turn it back on. Also please ask someone on macOS to test this (probably spin up localnet and check that you can still open the wallet or something like that), as we discussed in Slack. Docker on macOS can be a bit finicky.
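A quick smoke test along those lines; the port is a placeholder for whatever LocalNet's wallet UI actually publishes:

```bash
# After spinning up LocalNet on macOS, the loopback-bound wallet UI should
# still answer; a timeout or non-success response fails the check.
curl -fsS --max-time 10 http://127.0.0.1:2000/ >/dev/null \
  && echo "wallet UI reachable" \
  || echo "wallet UI NOT reachable"
```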
Added release notes. Added the option to choose between 0.0.0.0 and 127.0.0.1 to the LocalNet docker-compose. Added exports to stop.sh to avoid CI complaints. [ci] Signed-off-by: Pasindu Tennage <[email protected]>
Nice job, thank you @pasindutennage-da !
Mainly to try it out (90% sure this doesn't break anything there), you might want to consider doing a (basic) cluster test as well before you merge: https://github.com/hyperledger-labs/splice/blob/main/TESTING.md#requesting-cluster-tests
- Replace ``-Dscala.concurrent.context.minThreads=8`` with ``-Dscala.concurrent.context.numThreads=8`` and set ``-XX:ActiveProcessorCount=8`` in the ``defaultJvmOptions`` for all the helm charts that deploy Scala apps. This should ensure that the internal execution contexts spawn 8 threads for processing and that the JVM is configured for 8 CPUs as well. The previous behavior would spawn up to the number of available processors, which can be as many as the CPUs on the actual node if no CPU limit is set. This should avoid overloading the nodes during heavy processing.
- Docker-compose based deployment of LocalNet, Validator, and SuperValidator expose only to 127.0.0.1 by default. If you want to expose externally, use ``-E`` in Validator and SuperValidator ``start.sh``. For LocalNet, ``export HOST_BIND_IP=0.0.0.0`` manually.
- Docker-compose based deployment of LocalNet, Validator, and SuperValidator expose only to 127.0.0.1 by default. If you want to expose externally, use ``-E`` in Validator and SuperValidator ``start.sh``. For LocalNet, ``export HOST_BIND_IP=0.0.0.0`` manually.
- Docker-compose based deployments of LocalNet, validator, and SV now expose only to 127.0.0.1 by default. If you want to expose externally, use ``-E`` in validator and SV ``start.sh``. For LocalNet, set ``export HOST_BIND_IP=0.0.0.0`` manually. |
wording nit
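For the two opt-in paths in the release note, usage would look roughly like this (a sketch; the flag and variable behave as described above, and the LocalNet startup command is an assumption):

```bash
# Validator / SV compose setups: opt in to external exposure via the flag.
./start.sh -E

# LocalNet: no flag, so override the bind address manually before starting.
export HOST_BIND_IP=0.0.0.0
docker compose up -d
```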
...and actually, how annoying would it be to extend our integration tests for these flows to check that all opened ports are bound to localhost? I.e.,
From a quick look at the code there it looks like a reasonable investment; let's do this so we guard against regressions?
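A hedged sketch of such a regression check, using `docker inspect` to assert that every published port of the running containers is bound to 127.0.0.1 (wiring it into the existing integration tests is left open):

```bash
# Fail if any running container publishes a port on a non-loopback host IP.
# Note: IPv6 loopback (::1) would also be flagged; allow it if needed.
for c in $(docker ps -q); do
  docker inspect -f \
    '{{range $p, $cfg := .NetworkSettings.Ports}}{{range $cfg}}{{.HostIp}}{{"\n"}}{{end}}{{end}}' \
    "$c" | sed '/^$/d' | grep -vx '127.0.0.1' \
    && { echo "non-loopback port binding in container $c"; exit 1; }
done
echo "OK: all published ports bound to 127.0.0.1"
```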
Fixes #2017
Set localhost as the default in the `SV`, `localnet`, and `validator` docker-compose setups in `cluster/compose`.

For `SV` and `validator`, there is an option to expose externally using the `-E` flag in the respective `start.sh` scripts. For `localnet`, it is hardcoded, because no external traffic is expected.