
[Feature]: Lighter weight Java runtime like OpenJ9 #373

Open
paulschmeida opened this issue Nov 19, 2023 · 53 comments
Labels
enhancement New feature or request help wanted Extra attention is needed

Comments

@paulschmeida

What problem are you looking to solve?

Since a lot of people run this software for their home networks with just a couple of APs, having a Docker container with a full-blown version of Java seems like overkill.

Describe the solution that you have in mind

I propose switching to a lighter-weight version of the JRE, one that's built specifically for containers. Now bear in mind I'm not a Java developer, but I've been told that OpenJ9 from IBM Semeru, for example, is more lightweight and can run the same Java apps at 1/3 of the memory footprint and with lower CPU usage.

Additional Context

No response

@paulschmeida paulschmeida changed the title [Feature]: [Feature]: Lighter weight Java runtime like OpenJ9 Nov 19, 2023
@mbentley
Owner

I can't say I know anything about OpenJ9/IBM Semeru, so I can't really be sure about comparisons to OpenJDK JRE 17, which is what we are using today. My main concerns would be things like:

  1. Supportability - is TP-Link going to give me any grief if I have problems with the controller and I am not running an Oracle JRE or OpenJDK JRE?
  2. Extra maintenance - is it worth the extra effort of adding the scripting to install a different JRE? Installing the OpenJDK JRE is just a pre-packaged .deb from the Ubuntu repos, which is simple, whereas I would need to fetch the latest version of the OpenJ9 package, install it, and probably do some environment configuration to get it working, and that assumes they don't decide to change anything about the packaging randomly in the future. It's not a huge deal, but it's something I wouldn't have to worry about otherwise.
  3. Benefit - is there enough of a benefit to running an Omada Controller with a different JRE to warrant the extra effort? I'd need to see some sort of proof of concept to show that it is worth it. Even just a hacky branch that gets it installed and can show a startup-time benefit and a longer-running resource benefit would be good. It's also helpful to keep in mind that MongoDB is in the standard image, so when doing resource comparisons, it would probably be a good idea to keep MongoDB separate to more easily compare apples to apples between the two JREs.

I can try to put together a proof of concept, but it'd be something on the back burner for me, in all honesty. I know what I am getting with the OpenJDK JRE, and I know the support lifecycle because it's packaged with Ubuntu.

@mbentley mbentley added enhancement New feature or request help wanted Extra attention is needed labels Nov 25, 2023
@ktims

ktims commented Jan 27, 2024

I had a go at this on https://github.com/ktims/docker-omada-controller/tree/openj9-testing; initial results are promising: I see a more than 45% reduction in container memory utilization. It seems to work fine, though I haven't tested it extensively or with a 'real' workload.

OpenJDK:

$ podman run --name openjdk --network=host --rm -it docker.io/mbentley/omada-controller
# wait for application to be ready for login, go through setup wizard, log in
$ podman stats
ID            NAME        CPU %       MEM USAGE / LIMIT  MEM %       NET IO      BLOCK IO    PIDS        CPU TIME      AVG CPU %
393872374d1f  openjdk     0.77%      1.722GB / 50.43GB  3.42%       0B / 0B     0B / 0B     231         1m35.085021s  131.58%

OpenJ9:

$ podman run --name openj9 --network=host --rm -it 1eddddeef383ebc8cac7c546e9c8653d96da03ace7a6709530fccd85d738f99a
# ...
$ podman stats
ID            NAME        CPU %       MEM USAGE / LIMIT  MEM %       NET IO      BLOCK IO    PIDS        CPU TIME     AVG CPU %
17ae8110009e  openj9      0.85%       864.9MB / 50.43GB  1.72%       0B / 0B     0B / 0B     235         1m1.456071s  59.71%

What's more, OpenJ9 is aware of container memory restrictions, so if I'm really rude to the container and only give it 512m RAM, it can be even more aggressive:

$ podman run -m 512m --name openj9 --network=host --rm -it 1eddddeef383ebc8cac7c546e9c8653d96da03ace7a6709530fccd85d738f99a
# ...
# podman stats
ID            NAME        CPU %       MEM USAGE / LIMIT  MEM %       NET IO      BLOCK IO    PIDS        CPU TIME      AVG CPU %
e8ed45b24ee7  openj9      2.65%       337.6MB / 536.9MB  62.87%      0B / 0B     0B / 0B     258         1m42.314089s  89.80%

@mbentley
Owner

mbentley commented Jan 27, 2024

Interesting. Do you have example install code so I could take a look as well? Significant memory reduction could be really interesting, especially for the lower powered systems a lot of people tend to run this on.

Also, I'd like to do some tests with MongoDB and the controller running as separate processes to get metrics from only the controller and remove that variable.

@ktims

ktims commented Jan 27, 2024

Not sure what you mean about install code; everything you need to try it should be in my openj9-testing branch. I made the following modifications:

  • Based the Dockerfile on ibm-semeru-runtimes:open-17-jre-focal instead of your Ubuntu image. Not sure what you changed from the stock Ubuntu image, but this one from IBM seems to work fine; I didn't quickly find any other easy way to get binaries for OpenJ9.
  • Removed the absolute path to the java command (the IBM images put it somewhere under /opt, and it's in $PATH).
  • Added the -Xtune:virtualized runtime flag, which is recommended for containerized deployments of OpenJ9.
  • Don't install OpenJDK in install.sh.

That's all that was required to get it up and running.
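
In Dockerfile terms, the changes boil down to something like this (a minimal sketch, not the exact branch contents; the jar path is a placeholder):

FROM ibm-semeru-runtimes:open-17-jre-focal

# Recommended for containerized OpenJ9 deployments; trades some peak
# throughput for a smaller footprint.
ENV JAVA_TOOL_OPTIONS="-Xtune:virtualized"

# "java" resolves via $PATH (the Semeru images install the JRE under /opt),
# so no absolute path is needed.
CMD ["java", "-jar", "/path/to/omada-controller.jar"]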

OpenJ9 definitely feels subjectively slower when it's cold, but once warmed up it actually seems to outperform OpenJDK based on page-load timing, which is pretty surprising to me.

This test is based on the docker-compose.yaml in my branch (but I built 5.13 with NO_MONGODB), running with the 512 MB memory constraint for OpenJ9 (which effectively gives it more memory than before, since MongoDB now runs separately) and a separate MongoDB. After a few minutes of poking around the interface of both instances:

ID            NAME            CPU %       MEM USAGE / LIMIT  MEM %       NET IO             BLOCK IO    PIDS        CPU TIME      AVG CPU %
2d9a29df8835  mongodb2        0.49%       195.7MB / 50.43GB  0.39%       751.6kB / 1.86MB   0B / 0B     38          4.329065s     1.58%
b4b69f290cf3  omada_original  0.59%       1.585GB / 50.43GB  3.14%       1.911MB / 760.3kB  0B / 0B     185         1m50.125153s  40.33%
ba19b583b94e  omada_openj9    0.62%       348.5MB / 536.9MB  64.91%      1.873MB / 753.7kB  0B / 0B     219         1m55.460964s  42.44%
eb521ccda5ce  mongodb         0.50%       193.2MB / 50.43GB  0.38%       759kB / 1.898MB    0B / 0B     38          4.284706s     1.56%

@bartong13

This looks very promising for us 'home' users. I have an RPi 4 2GB which I use to host a few other containers; it sits at about 1 GB usage, but the current full-OpenJDK image just pushes it too far and hits OOM issues. Limiting the container memory to 800 MB stops the OOMs, but the controller software becomes unusable. So I'm running on a desktop host instead, which is not ideal, because I would prefer to use a low-power device so the controller can be left running 24/7.

If @ktims' testing is anything to go by, I would be able to run the OpenJ9 image and still have a bit of memory to spare.

Apologies that I cannot assist with development, but I am happy to assist with testing it in a 'production' environment if it gets to that stage (1 switch and 3 EAPs with approx. 30 clients max).

@mbentley
Owner

Sorry for the lack of progress so far. I've added this to my backlog of things to look at further.

@jinkazph

Sorry for the lack of progress so far. I've added this to my backlog of things to look at further.

Nice.. Good to hear..

@mbentley
Owner

I started some work on a custom base image for OpenJ9 because I prefer to have consistency & control over the ability to patch the base image. It's nothing crazy (Dockerfile for this), just taking the OpenJ9 images, grabbing the JRE + the shared classes cache and putting them in an image (on Docker Hub). OpenJ9 only has amd64 and arm64 builds available, but I don't really see that as a problem as the armv7l images are already doing something different today. My builds really aren't doing anything different than the ibm-semeru-runtimes images, but this way I can quickly patch the underlying ubuntu:20.04 image with minimal effort and without having to really build anything.
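
Conceptually, the base image is just a multi-stage copy along these lines (a rough sketch; the /opt/java paths match the Semeru image defaults as far as I know, but treat them as assumptions):

FROM ibm-semeru-runtimes:open-17-jre-focal AS semeru

FROM ubuntu:20.04
# Grab the JRE and the prebuilt shared classes cache from IBM's image.
COPY --from=semeru /opt/java/openjdk /opt/java/openjdk
COPY --from=semeru /opt/java/.scc /opt/java/.scc
ENV PATH="/opt/java/openjdk/bin:${PATH}"

This way, a rebuild only re-pulls ubuntu:20.04 for OS patches; nothing gets compiled.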

I hope to get a chance to test this out later today with an actual build so I can do some comparisons myself. If I do, I'll update here and probably put up a branch of what I have.

@mbentley
Owner

OK, so I have a branch https://github.com/mbentley/docker-omada-controller/tree/openj9 that seems to work (comparison from master).

I just built a couple of test images for amd64 and arm64:
mbentley/omada-controller:5.13-openj9test-amd64
mbentley/omada-controller:5.13-openj9test-arm64

@bartong13

Thanks @mbentley. If we run a container from this image, can it retain the same volumes as a container that was running the OpenJDK image? Or would it be better to set this up as a "fresh" container and use the controller migration within the Omada controller software to move devices across instead?

@mbentley
Owner

It should be fine, but I would make sure to take backups beforehand (you should be taking regular backups anyway; auto backups are built into the controller software, assuming you've enabled them). Keep in mind, this isn't merged into master as I need to do some more testing, so there may be some changes, but I intend for them to not be breaking changes.

@bartong13

Yeah for sure, always backup haha.

Do you envisage having two images once this work is complete? A "full" image plus a "lite" image, so to speak? Or are you actually thinking you'll permanently switch to using OpenJ9 going forward?

@mbentley
Owner

I did some comparisons between the normal OpenJDK, OpenJ9, and OpenJ9 with -Xtune:virtualized in terms of resource consumption:

# without -Xtune:virtualized on OpenJ9
$ docker stats --no-stream omada-controller omada-controller-oj9
CONTAINER ID   NAME                   CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O        PIDS
8f7348b05774   omada-controller       1.04%     1.519GiB / 125.5GiB   1.21%     9.42kB / 4.85kB   860kB / 33.6MB   179
13f45ffff47d   omada-controller-oj9   1.35%     783.1MiB / 125.5GiB   0.61%     0B / 0B           0B / 24.5MB      160

# with -Xtune:virtualized on OpenJ9
$ docker stats --no-stream omada-controller omada-controller-oj9
CONTAINER ID   NAME                   CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O         PIDS
8f7348b05774   omada-controller       1.53%     1.512GiB / 125.5GiB   1.21%     9.42kB / 4.85kB   860kB / 31.2MB    181
74da808fd8d3   omada-controller-oj9   1.40%     750.1MiB / 125.5GiB   0.58%     9.45kB / 4.92kB   20.1MB / 26.2MB   170

For JVM startup times on a clean install:

  • OpenJDK - 1:14
  • OpenJ9 - 1:15
  • OpenJ9 w/-Xtune:virtualized - 1:16

So startup times were pretty close to identical on my server.

@mbentley
Owner

Do you envisage having two images once this work is complete? A "full" image plus a "lite" image, so to speak? Or are you actually thinking you'll permanently switch to using OpenJ9 going forward?

Ideally, I would like to not add additional images to build, as I am currently building 26 images just for the Omada Controller versions that are "supported". Add in those that are "archived" (which I don't build daily) and it's 65. This includes amd64, arm64, armv7l, and the versions with Chrome for report generation.

My thinking is that OpenJ9 is basically an extended OpenJDK, so I would hope there are no regressions or complications from switching.

@bartong13

I did some comparisons between the normal OpenJDK, OpenJ9, and OpenJ9 with -Xtune:virtualized in terms of resource consumption:

Did you have any 'load' on the controllers during these tests? ie: Were there actually any devices adopted into the controllers, any traffic flowing on the devices under their control, etc?

@mbentley
Owner

Did you have any 'load' on the controllers during these tests? ie: Were there actually any devices adopted into the controllers, any traffic flowing on the devices under their control, etc?

No, this was just for overall startup of a brand-new controller, no devices under management. I wasn't ready to try anything with my own instance yet.

@mstoodle

mstoodle commented Feb 27, 2024

For JVM startup times on a clean install:
OpenJDK - 1:14
OpenJ9 - 1:15
OpenJ9 w/-Xtune:virtualized - 1:16

You may want to try populating your own shared classes cache in your image build step rather than copying the one from the original containers (assuming I understood what you wrote earlier about creating your own containers). If you do an application startup inside the build step, it should create a custom shared classes cache for your app inside your container that can then start your container more quickly. Even better if there is a way to run some load in that build step, because then you'll be able to get JIT compiled code cached right into your container image (if you keep that -Xtune:virtualized option). Hopefully then you'll see some improvement in the startup times with OpenJ9, and if you don't there are some diagnostics we could look into to try to understand why.
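
A build-step warm-up could look roughly like this (a hedged sketch only; the jar path, cache name, and sleep duration are placeholders, and the real app would also need MongoDB available during the build):

# Start the app once during the build so classes (and, with
# -Xtune:virtualized, AOT-compiled code) land in the shared classes cache.
RUN java -Xshareclasses:name=omada_scc,cacheDir=/opt/java/.scc \
      -Xtune:virtualized -jar /path/to/omada-controller.jar & \
    APP_PID=$!; \
    sleep 120; \
    kill "$APP_PID"; \
    # Sanity check: print what ended up in the cache (diagnostic only).
    java -Xshareclasses:name=omada_scc,cacheDir=/opt/java/.scc,printStats || true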

Great to see people getting value from Eclipse OpenJ9 !! Best of luck!

@mbentley
Owner

Thanks @mstoodle for the tip! That's correct, I am just pulling the shared classes cache from the Docker Hub image, so it sounds like I have some playing around to do to see what might be possible to optimize the cache. I have limited information about the app itself.

If you don't mind me asking, one thing I would be curious about is whether one approach could be to use a shared classes cache that is persistent in a read/write directory outside of the container. The first startup of the app wouldn't be optimized, but I would imagine that subsequent startups and running of the app might be? Is that an approach worth investigating, or would it be an anti-pattern? Just curious, as getting this app started as part of a build step might introduce some complexity on my end that could be a bit funky and resource-intensive, considering this app needs and auto-starts MongoDB as well. I'll also see if I can locate the documentation on how shared classes caching works, as I will admit I haven't even looked yet.

@mstoodle

mstoodle commented Feb 29, 2024

Hi @mbentley. You can configure the cache to reside on a Docker volume if you like, but it gets troublesome to manage (at least in general deployments; not sure if the complexity would be there in your case). But there are advantages to having the cache inside the container: you can make the prepopulated cache layer read-only, which speeds up access to it, and if you have people who build image layers on top of yours, they can add their own layer to the shared cache too (it's designed to work alongside how Docker layers work).

Quick references to the shared cache doc: https://eclipse.dev/openj9/docs/shrc/ or, if you prefer blogs you can look at some here: https://blog.openj9.org/tag/sharedclasses/ . If you have questions, you can @ mention me here and I'll try to respond.
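
For reference, the volume-based variant would look something like this (untested sketch; the cache name, mount point, and tag are illustrative, and note it would override any JAVA_TOOL_OPTIONS baked into the image):

$ docker run -d \
    -v /srv/omada/scc:/scc \
    -e JAVA_TOOL_OPTIONS="-Xshareclasses:name=omada,cacheDir=/scc,nonfatal -Xtune:virtualized" \
    mbentley/omada-controller:5.13-openj9test

The first start pays the population cost; later starts reuse the cache as long as the JVM and class paths haven't changed.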

@IAmKonni

Do you have a recent version of this test image? mbentley/omada-controller:5.13-openj9test-amd64 gives me 5.13.23 and not the latest 5.13.30 stable release. I would like to give it a try. Maybe I can help you with that; some years ago I was a Java developer. :)

@mbentley
Owner

Sorry, I haven't had a chance to follow up on anything further, but I was able to build some new images using the latest version just now:

mbentley/omada-controller:5.13-openj9test - multi-arch (auto-selects amd64 or arm64)
mbentley/omada-controller:5.13-openj9test-amd64 - amd64 specific tag
mbentley/omada-controller:5.13-openj9test-arm64 - arm64 specific tag

I've done no further testing on them yet but I assume they start up :)

@IAmKonni

Switched to this test image today and no problems so far.


@jinkazph

Stable also for me. Already using it for a month..

@mbentley
Owner

At this point, I would like to work on better understanding a shared classes cache pattern that makes the most sense for how the app runs in a container. I see a lot of experimentation in the future to make that happen.

@eblieb

eblieb commented May 1, 2024

I am pretty new to Docker; to run the OpenJ9 container, would I just replace the image name from the default in the run command with the new one and keep all the port allocations and everything else the same?

@mbentley
Owner

mbentley commented May 1, 2024

I am pretty new to Docker; to run the OpenJ9 container, would I just replace the image name from the default in the run command with the new one and keep all the port allocations and everything else the same?

Correct. And to be clear, I am manually building this image right now so it's not getting automatic updates at the moment. I don't expect there to be issues but just FYI. Make sure you're taking regular backups of your persistent data.
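
For example, taking the usual run command and changing only the image tag (trimmed here; the ports and volume paths are illustrative, keep whatever you use today):

$ docker run -d \
    --name omada-controller \
    -p 8043:8043 \
    -v omada-data:/opt/tplink/EAPController/data \
    -v omada-logs:/opt/tplink/EAPController/logs \
    mbentley/omada-controller:5.13-openj9test-amd64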

@eblieb

eblieb commented May 1, 2024 via email

@pduchnovsky

pduchnovsky commented Oct 1, 2024

I did not expect such an improvement, but here we are: it seems to be running at half the memory it used previously (1 GB vs 2 GB).

Good idea @paulschmeida and good job @mbentley,

Would love to see this also for 5.14 :)
Actually, I would love to have this as a regular tag with automated builds for it, but I understand it takes time :)

@mbentley
Owner

mbentley commented Oct 1, 2024

So I just made a few changes that should make this mergeable: the install.sh script will use OpenJ9 if it's found (you need to specify a base image that already has OpenJ9, as seen in the example build command below) or fall back to OpenJDK 17. Once I get this merged, I can start to do some auto builds of an OpenJ9 tag, but I don't yet want to make it the default version, as it seems like there are still optimizations that can be done with the shared classes cache.

Right now, JAVA_TOOL_OPTIONS is being set in the base image layer, and at this point, I am pretty sure I mostly just copied that from here.

A quick build command that will work for now is:

docker build --pull --progress plain --build-arg BASE="mbentley/openj9:17" --build-arg INSTALL_VER=5.14.26.1 --platform linux/amd64 -t mbentley/omada-controller:test-openj9-amd64 -f Dockerfile.v5.x .
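
The fallback logic amounts to roughly this (a sketch of the behavior described, not the actual install.sh):

# Use a JRE shipped in the base image if one exists; otherwise install OpenJDK 17.
if command -v java >/dev/null 2>&1; then
  echo "INFO: existing Java runtime detected; skipping OpenJDK install"
else
  apt-get update && apt-get install -y --no-install-recommends openjdk-17-jre-headless
fi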

@eblieb

eblieb commented Oct 1, 2024 via email

@mbentley
Owner

mbentley commented Oct 1, 2024

So now that I just merged #479, I'm trying to determine which images are worth building with OpenJ9 as test images. Right now, I am thinking that the latest 5.14 and the 5.14 beta might be worth doing, but I know that the current 5.14 GA version has random issues with the app itself, documented in #418, so maybe I could also include 5.13. Thoughts?

@pduchnovsky

pduchnovsky commented Oct 1, 2024

I agree with 5.13/5.14/5.14 beta; I think this would cover most use cases. Due to the 5.14 GA problems, I'm actually still running 5.13 as well.
If the 5.14 beta fixes that, it would become my main for the time being; the memory savings of OpenJ9 are insane :)

@forgenator

I'm waiting on an automated build of OpenJ9 and will move to it immediately when it happens. I like to be on the bleeding edge, but I do like to automate things with watchdog, so once we have any automated builds, I'm gonna move to it. Now running 5.13 since the 5.14 bug is preventing me from updating, but once they release a new 5.14 that fixes it, I'll move to that.

@forgenator

So I agree that the 5.13/5.14/beta releases are enough for OpenJ9, in my opinion! :D

@jinkazph

jinkazph commented Oct 2, 2024

nice..

@mbentley
Owner

mbentley commented Oct 2, 2024

OK, here are the autobuild tags that I just added. I still need to add something to the readme but this will work for now:

5.13

multi-arch

mbentley/omada-controller:5.13.30.8-openj9
mbentley/omada-controller:5.13-openj9

arch specific

mbentley/omada-controller:5.13.30.8-openj9-amd64
mbentley/omada-controller:5.13-openj9-amd64

mbentley/omada-controller:5.13.30.8-openj9-arm64
mbentley/omada-controller:5.13-openj9-arm64

with chromium

mbentley/omada-controller:5.13.30.8-openj9-chromium-amd64
mbentley/omada-controller:5.13.30.8-openj9-chromium
mbentley/omada-controller:5.13-openj9-chromium-amd64
mbentley/omada-controller:5.13-openj9-chromium

5.14

multi-arch

mbentley/omada-controller:5.14.26.1-openj9
mbentley/omada-controller:5.14-openj9

arch specific

mbentley/omada-controller:5.14.26.1-openj9-amd64
mbentley/omada-controller:5.14-openj9-amd64

mbentley/omada-controller:5.14.26.1-openj9-arm64
mbentley/omada-controller:5.14-openj9-arm64

with chromium

mbentley/omada-controller:5.14.26.1-openj9-chromium-amd64
mbentley/omada-controller:5.14.26.1-openj9-chromium
mbentley/omada-controller:5.14-openj9-chromium-amd64
mbentley/omada-controller:5.14-openj9-chromium

beta

multi-arch

mbentley/omada-controller:beta-5.14.32.2-openj9
mbentley/omada-controller:beta-5.14-openj9

arch specific

mbentley/omada-controller:beta-5.14.32.2-openj9-amd64
mbentley/omada-controller:beta-5.14-openj9-amd64

mbentley/omada-controller:beta-5.14.32.2-openj9-arm64
mbentley/omada-controller:beta-5.14-openj9-arm64

with chromium

mbentley/omada-controller:beta-5.14.32.2-openj9-chromium-amd64
mbentley/omada-controller:beta-5.14.32.2-openj9-chromium
mbentley/omada-controller:beta-5.14-openj9-chromium-amd64
mbentley/omada-controller:beta-5.14-openj9-chromium

@forgenator

Just switched to OpenJ9, and while it's still painfully slow to use (multi-second page loads, etc.), at least the memory footprint is lower. And yes, nothing went differently; I'm still using 5.13, so everything just worked. I have 3 devices and about 20 clients.

OpenJDK

CONTAINER ID   NAME                        CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O         PIDS
9cdb3e7bfd96   omada-controller            0.77%     1.386GiB / 7.864GiB   17.62%    12.2MB / 26.7MB   22.7MB / 277MB    314

Openj9

CONTAINER ID   NAME                        CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O         PIDS
667ad53b7786   omada-controller            0.81%     867.4MiB / 7.864GiB   10.77%    856kB / 21.3MB    1.75MB / 1.2MB    353

Overall, a really good update! Can't wait for 5.14, and hopefully we can get some additional speed improvements with the shared classes cache implementation. I would help if I knew how, but I'm a total newbie with Java stuff.

I have this in my logs now, though:

omada-controller  | JVMSHRC840E Failed to start up the shared cache.
omada-controller  | JVMSHRC686I Failed to startup shared class cache. Continue without using it as -Xshareclasses:nonfatal is specified

@mbentley
Owner

mbentley commented Oct 2, 2024

The lack of shared cache is why this issue is still open - right now, there is no SCC being generated. I have a WIP hack based on the method I see here https://github.com/ibmruntimes/semeru-containers/blob/1212e4fe213cb5b4c65eb260ccbbc40a7eadfb5d/17/jre/ubuntu/focal/Dockerfile.open.releases.full#L69-L110 but I have yet to commit that to the openj9 branch as I haven't had an opportunity to really do much testing.

@pduchnovsky

pduchnovsky commented Oct 2, 2024

I don't know; I run it (mbentley/omada-controller:beta-5.14-openj9) on a Synology DS920+ with 8 GB of RAM together with 20 other containers, overall at 70% memory, and it's pretty snappy. I didn't notice any longer loading times.
I've got 8 devices and 38 clients.

@forgenator

I'm not sure if this is related to this, but I'm running my Omada on Rasp5 with 8GB of RAM, and for some reason every single page load takes multiple seconds, loading the organization front page takes about ~10-20 seconds. The UI is really sluggish. I don't really mind, since I'm only using it when I need to actually accomplish something, but it is weird. Maybe it's an ARM issue? I can make a separate issue to track this if it would be helpful and provide a lot more info.

@forgenator

And I always thought it was just an Omada issue, since they have acknowledged that even their HW controllers are slow for some: https://community.tp-link.com/en/business/forum/topic/639500

@mbentley
Owner

mbentley commented Oct 3, 2024

I'm not sure if this is related to this, but I'm running my Omada on Rasp5 with 8GB of RAM, and for some reason every single page load takes multiple seconds, loading the organization front page takes about ~10-20 seconds. The UI is really sluggish. I don't really mind, since I'm only using it when I need to actually accomplish something, but it is weird. Maybe it's an ARM issue? I can make a separate issue to track this if it would be helpful and provide a lot more info.

I'm curious if you've done any benchmarking of your storage speed on your Raspberry Pi 5. I wouldn't think that the CPU or memory for a Pi would cause significant slowdowns and can only really think of storage speed maybe being the bottleneck outside of some potential bug in the software.
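
Something as quick as dd would give a rough sequential number (illustrative sizes; run it on the filesystem that holds the container data):

$ dd if=/dev/zero of=./ddtest bs=1M count=1024 oflag=direct status=progress   # write
$ dd if=./ddtest of=/dev/null bs=1M iflag=direct status=progress              # read
$ rm ./ddtest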

@forgenator

I'm not sure if this is related to this, but I'm running my Omada on Rasp5 with 8GB of RAM, and for some reason every single page load takes multiple seconds, loading the organization front page takes about ~10-20 seconds. The UI is really sluggish. I don't really mind, since I'm only using it when I need to actually accomplish something, but it is weird. Maybe it's an ARM issue? I can make a separate issue to track this if it would be helpful and provide a lot more info.

I'm curious if you've done any benchmarking of your storage speed on your Raspberry Pi 5. I wouldn't think that the CPU or memory for a Pi would cause significant slowdowns and can only really think of storage speed maybe being the bottleneck outside of some potential bug in the software.

Yeah, I've been running an NVMe SSD for ~5 months now with these speeds: write 544 MB/s, read 842 MB/s. I too thought that the NVMe upgrade would have helped with Omada running sluggishly, but alas, that didn't manifest.

@mbentley
Owner

mbentley commented Oct 3, 2024

Well there goes that idea 😆

@mbentley
Owner

mbentley commented Oct 3, 2024

This might be one of the most disgusting things I have done in a Dockerfile in a long time, but I thought I would at least commit what I had so far: a RUN build step which is extremely hacky and probably not how I would actually do this. It seems like it populates the cache, but I'll have to see, as I only had time to confirm that it basically outputs a number and that's all. I never tested to see if it was actually being used at startup.
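
If anyone wants to check whether a baked-in cache is actually being picked up, OpenJ9 can report on it directly (the cache name and directory below are assumptions based on the Semeru image defaults):

$ java -Xshareclasses:name=openj9_system_scc,cacheDir=/opt/java/.scc,printStats

Adding the verbose suboption to the runtime -Xshareclasses options will also log at startup whether the cache was found and opened.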

@AndreKR

AndreKR commented Nov 19, 2024

There is no 5.14-openj9test version, is that correct?
Currently I'm running 5.13-openj9test because for some reason 5.13 doesn't run for me.

@mbentley
Owner

There is a 5.14-openj9 tag. I need to update the main README, as I am regularly building 5.13, 5.14, and the beta version with OpenJ9.

@eblieb

eblieb commented Nov 19, 2024 via email

@AndreKR

AndreKR commented Nov 19, 2024

There is a 5.14-openj9 tag.

Hm, I thought I tried that, but apparently I didn't. Tried it now and it works*, thanks!

* After some issues with MongoDB being killed because I had a memory limit set. It seems 5.13-openj9test can run with 1 GB of memory while 5.13, 5.14 and 5.14-openj9 cannot.

@pduchnovsky

pduchnovsky commented Nov 20, 2024

I currently run the latest-openj9 image with a 1500m memory limit; it works fine at around 1 GB of memory usage:

  oc:
    image: mbentley/omada-controller:latest-openj9
    container_name: oc
    ulimits:
      nofile:
        soft: 4096
        hard: 8192
    stop_grace_period: 60s
    network_mode: host
    environment:
      - PUID=508
      - PGID=508
      - TZ=Europe/Amsterdam
    labels:
      - traefik.enable=true
      - traefik.http.services.oc.loadbalancer.server.scheme=https
      - traefik.http.services.oc.loadbalancer.server.port=8043
      - traefik.http.routers.oc.rule=Host(`oc.${TRAEFIK_DOMAIN}`)
      - traefik.http.routers.oc.entrypoints=websecure
      - traefik.http.routers.oc.middlewares=internal@file
    volumes:
      - /volume1/docker/oc/data:/opt/tplink/EAPController/data
      - /volume1/docker/oc/logs:/opt/tplink/EAPController/logs
    healthcheck:
      disable: true
    restart: always
    mem_limit: 1500m
    memswap_limit: 1500m
    security_opt:
      - no-new-privileges:true

@PappyChappy1

Is OpenJ9 still preferred if I'm running this on a powerful machine? I'd prefer better performance overall.

@mbentley
Owner

Is OpenJ9 still preferred if I'm running this on a powerful machine? I'd prefer better performance overall.

I wouldn't really say there is any particular noticeable difference on a powerful machine, but I haven't done extensive testing. At the moment, it seems to be more about memory usage, where the OpenJ9 containers use less while running.

@eblieb

eblieb commented Dec 31, 2024

I will say I was using OpenJ9 for a while and switched back to the default beta (after using the OpenJ9 beta of 5.15.8.1), and the web UI is much faster/snappier with the default Java build vs. OpenJ9. I think the performance decrease isn't worth the RAM savings from OpenJ9.
