[bitnami/openldap] Container not saved using podman #76655

Open
virtualproject opened this issue Jan 25, 2025 · 6 comments
Labels: openldap, tech-issues, triage

Comments

virtualproject commented Jan 25, 2025

Name and Version

docker.io/bitnami/openldap

What architecture are you using?

amd64

What steps will reproduce the bug?

After creating an instance with rootless podman, the container starts but is not saved: once stopped, it disappears entirely.

[ldap@/home/ldap ~]$ podman -v
podman version 4.9.4-dev
[ldap@/home/ldap ~]$ uname -a
Linux fr1slpsuf00699 4.18.0-553.6.1.el8.x86_64 #1 SMP Thu May 30 04:13:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
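
The instance is created by ./02.instance.sh (run below). Its exact contents were not posted, but the command can be read back from the PersistentPreRunE line in the debug output, so the following is a reconstruction, not the original file:

#!/bin/bash
# 02.instance.sh -- reconstructed from the PersistentPreRunE line in the
# debug log below; the original script was not posted.
podman run --log-level debug --detach --rm --name openldap \
    -e LDAP_ADMIN_USERNAME=admin \
    -e LDAP_ADMIN_PASSWORD=adminpassword \
    -e LDAP_USERS=test,test1 \
    -e LDAP_PASSWORDS=testpass,testpass \
    -e LDAP_ROOT=dc=test,dc=com \
    -e LDAP_ADMIN_DN=cn=admin,dc=test,dc=com \
    -p 1389:1389 \
    bitnami/openldap:latest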



[ldap@/home/ldap ~]$ ./02.instance.sh
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman run --log-level debug --detach --rm --name openldap -e LDAP_ADMIN_USERNAME=admin -e LDAP_ADMIN_PASSWORD=adminpassword -e LDAP_USERS=test,test1 -e LDAP_PASSWORDS=testpass,testpass -e LDAP_ROOT=dc=test,dc=com -e LDAP_ADMIN_DN=cn=admin,dc=test,dc=com -p 1389:1389 bitnami/openldap:latest)
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using sqlite as database backend
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/ldap/.local/share/containers/storage
DEBU[0000] Using run root /tmp/containers-user-1004/containers
DEBU[0000] Using static dir /home/ldap/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/podman-run-1004/libpod/tmp
DEBU[0000] Using volume path /home/ldap/.local/share/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] Not configuring container store
DEBU[0000] Initializing event backend file
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/runc"
INFO[0000] Setting parallel job count to 7
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman run --log-level debug --detach --rm --name openldap -e LDAP_ADMIN_USERNAME=admin -e LDAP_ADMIN_PASSWORD=adminpassword -e LDAP_USERS=test,test1 -e LDAP_PASSWORDS=testpass,testpass -e LDAP_ROOT=dc=test,dc=com -e LDAP_ADMIN_DN=cn=admin,dc=test,dc=com -p 1389:1389 bitnami/openldap:latest)
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using sqlite as database backend
DEBU[0000] Overriding run root "/tmp/podman-run-1004/containers" with "/tmp/containers-user-1004/containers" from database
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/ldap/.local/share/containers/storage
DEBU[0000] Using run root /tmp/containers-user-1004/containers
DEBU[0000] Using static dir /home/ldap/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/podman-run-1004/libpod/tmp
DEBU[0000] Using volume path /home/ldap/.local/share/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] Cached value indicated that native-diff is usable
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend file
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/runc"
INFO[0000] Setting parallel job count to 7
DEBU[0000] Failed to add podman to systemd sandbox cgroup: dbus: couldn't determine address of session bus
DEBU[0000] Successfully loaded 1 networks
DEBU[0000] Adding port mapping from 1389 to 1389 length 1 protocol ""
DEBU[0000] Pulling image bitnami/openldap:latest (policy: missing)
DEBU[0000] Looking up image "bitnami/openldap:latest" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/001-rhel-shortnames.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/002-rhel-shortnames-overrides.conf"
DEBU[0000] Trying "docker.io/bitnami/openldap:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/ldap/.local/share/containers/storage+/tmp/containers-user-1004/containers]@24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543"
DEBU[0000] Found image "bitnami/openldap:latest" as "docker.io/bitnami/openldap:latest" in local containers storage
DEBU[0000] Found image "bitnami/openldap:latest" as "docker.io/bitnami/openldap:latest" in local containers storage ([overlay@/home/ldap/.local/share/containers/storage+/tmp/containers-user-1004/containers]@24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543)
DEBU[0000] exporting opaque data as blob "sha256:24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543"
DEBU[0000] Looking up image "docker.io/bitnami/openldap:latest" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "docker.io/bitnami/openldap:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/ldap/.local/share/containers/storage+/tmp/containers-user-1004/containers]@24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543"
DEBU[0000] Found image "docker.io/bitnami/openldap:latest" as "docker.io/bitnami/openldap:latest" in local containers storage
DEBU[0000] Found image "docker.io/bitnami/openldap:latest" as "docker.io/bitnami/openldap:latest" in local containers storage ([overlay@/home/ldap/.local/share/containers/storage+/tmp/containers-user-1004/containers]@24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543)
DEBU[0000] exporting opaque data as blob "sha256:24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543"
DEBU[0000] Looking up image "bitnami/openldap:latest" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "docker.io/bitnami/openldap:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/ldap/.local/share/containers/storage+/tmp/containers-user-1004/containers]@24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543"
DEBU[0000] Found image "bitnami/openldap:latest" as "docker.io/bitnami/openldap:latest" in local containers storage
DEBU[0000] Found image "bitnami/openldap:latest" as "docker.io/bitnami/openldap:latest" in local containers storage ([overlay@/home/ldap/.local/share/containers/storage+/tmp/containers-user-1004/containers]@24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543)
DEBU[0000] exporting opaque data as blob "sha256:24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543"
DEBU[0000] Inspecting image 24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543
DEBU[0000] exporting opaque data as blob "sha256:24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543"
DEBU[0000] exporting opaque data as blob "sha256:24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543"
DEBU[0000] Inspecting image 24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543
DEBU[0000] Inspecting image 24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543
DEBU[0000] Inspecting image 24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543
DEBU[0000] using systemd mode: false
DEBU[0000] setting container name openldap
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0000] Allocated lock 0 for container 22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18
DEBU[0000] parsed reference into "[overlay@/home/ldap/.local/share/containers/storage+/tmp/containers-user-1004/containers]@24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543"
DEBU[0000] exporting opaque data as blob "sha256:24e5e3afbd1b4d3aa643ff2e0f137921fc4ee56a5a637a3562d4db4918ad1543"
DEBU[0000] Cached value indicated that idmapped mounts for overlay are not supported
DEBU[0000] Check for idmapped mounts support
DEBU[0000] Created container "22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18"
DEBU[0000] Container "22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18" has work directory "/home/ldap/.local/share/containers/storage/overlay-containers/22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18/userdata"
DEBU[0000] Container "22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18" has run directory "/tmp/containers-user-1004/containers/overlay-containers/22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18/userdata"
DEBU[0000] Cached value indicated that volatile is being used
DEBU[0000] overlay: mount_data=lowerdir=/home/ldap/.local/share/containers/storage/overlay/l/S73W6OVVHWZOXF6BCSCRRKYN34,upperdir=/home/ldap/.local/share/containers/storage/overlay/643fdf216b448e1ab8b16dfa380b8a6fe02db8b8a4f62387d4439fa7b2b73d4a/diff,workdir=/home/ldap/.local/share/containers/storage/overlay/643fdf216b448e1ab8b16dfa380b8a6fe02db8b8a4f62387d4439fa7b2b73d4a/work,userxattr,volatile,context="system_u:object_r:container_file_t:s0:c78,c809"
DEBU[0000] Made network namespace at /tmp/podman-run-1004/netns/netns-d54b0ad9-2ba3-c455-d7f1-5bf10a2a937d for container 22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18
DEBU[0000] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -r 3 -e 4 --netns-type=path /tmp/podman-run-1004/netns/netns-d54b0ad9-2ba3-c455-d7f1-5bf10a2a937d tap0
DEBU[0000] Mounted container "22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18" at "/home/ldap/.local/share/containers/storage/overlay/643fdf216b448e1ab8b16dfa380b8a6fe02db8b8a4f62387d4439fa7b2b73d4a/merged"
DEBU[0000] Created root filesystem for container 22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18 at /home/ldap/.local/share/containers/storage/overlay/643fdf216b448e1ab8b16dfa380b8a6fe02db8b8a4f62387d4439fa7b2b73d4a/merged
DEBU[0000] rootlessport: time="2025-01-25T10:48:47+01:00" level=info msg="Starting parent driver"
time="2025-01-25T10:48:47+01:00" level=info msg="opaque=map[builtin.readypipepath:/tmp/podman-run-1004/libpod/tmp/rootlessport1956834929/.bp-ready.pipe builtin.socketpath:/tmp/podman-run-1004/libpod/tmp/rootlessport1956834929/.bp.sock]"
DEBU[0000] rootlessport: time="2025-01-25T10:48:47+01:00" level=info msg="Starting child driver in child netns (\"/proc/self/exe\" [rootlessport-child])"
DEBU[0000] rootlessport: time="2025-01-25T10:48:47+01:00" level=info msg="Waiting for initComplete"
DEBU[0000] rootlessport: time="2025-01-25T10:48:47+01:00" level=info msg="initComplete is closed; parent and child established the communication channel"
time="2025-01-25T10:48:47+01:00" level=info msg="Exposing ports [{ 1389 1389 1 tcp}]"
DEBU[0000] rootlessport: time="2025-01-25T10:48:47+01:00" level=info msg=Ready
DEBU[0000] rootlessport is ready
DEBU[0000] Modifying container 22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18 /etc/passwd
DEBU[0000] Modifying container 22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18 /etc/group
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] Workdir "/" resolved to host path "/home/ldap/.local/share/containers/storage/overlay/643fdf216b448e1ab8b16dfa380b8a6fe02db8b8a4f62387d4439fa7b2b73d4a/merged"
DEBU[0000] Created OCI spec for container 22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18 at /home/ldap/.local/share/containers/storage/overlay-containers/22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18 -u 22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18 -r /usr/bin/runc -b /home/ldap/.local/share/containers/storage/overlay-containers/22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18/userdata -p /tmp/containers-user-1004/containers/overlay-containers/22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18/userdata/pidfile -n openldap --exit-dir /tmp/podman-run-1004/libpod/tmp/exits --full-attach -l k8s-file:/home/ldap/.local/share/containers/storage/overlay-containers/22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /tmp/containers-user-1004/containers/overlay-containers/22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/ldap/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/containers-user-1004/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /tmp/podman-run-1004/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/ldap/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18]"
INFO[0000] Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/conmon: permission denied
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0000] Received: 2758
INFO[0000] Got Conmon PID as 2748
DEBU[0000] Created container 22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18 in OCI runtime
DEBU[0000] Starting container 22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18 with command [/opt/bitnami/scripts/openldap/entrypoint.sh /opt/bitnami/scripts/openldap/run.sh]
DEBU[0000] Started container 22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18
DEBU[0000] Notify sent successfully
22c723317336c2d11dd787ab8dc7cf32e9a8e4af0cddb0f104d32b7585396d18
DEBU[0000] Called run.PersistentPostRunE(podman run --log-level debug --detach --rm --name openldap -e LDAP_ADMIN_USERNAME=admin -e LDAP_ADMIN_PASSWORD=adminpassword -e LDAP_USERS=test,test1 -e LDAP_PASSWORDS=testpass,testpass -e LDAP_ROOT=dc=test,dc=com -e LDAP_ADMIN_DN=cn=admin,dc=test,dc=com -p 1389:1389 bitnami/openldap:latest)
DEBU[0000] Shutting down engines



[ldap@/home/ldap ~]$ podman ps
CONTAINER ID  IMAGE                              COMMAND               CREATED         STATUS         PORTS                   NAMES
22c723317336  docker.io/bitnami/openldap:latest  /opt/bitnami/scri...  10 seconds ago  Up 11 seconds  0.0.0.0:1389->1389/tcp  openldap

[ldap@/home/ldap ~]$ podman stop openldap
openldap

[ldap@/home/ldap ~]$ podman ps
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

[ldap@/home/ldap ~]$ podman ps -a
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES


[ldap@/home/ldap ~]$ podman start openldap
Error: no container with name or ID "openldap" found: no such container

What is the expected behavior?

[ldap@/home/ldap ~]$ podman ps -a
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
(expected: the stopped openldap container should still be listed here)

[ldap@/home/ldap ~]$ podman start openldap
(expected: the container restarts instead of failing with "no such container")

What do you see instead?

No container is saved, so it is not possible to restart it.

[ldap@/home/ldap ~]$ podman ps -a
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES


[ldap@/home/ldap ~]$ podman start openldap
Error: no container with name or ID "openldap" found: no such container
@virtualproject added the tech-issues label Jan 25, 2025
@github-actions bot added the triage label Jan 25, 2025
@javsalgar changed the title from "Container not saved using podman" to "[bitnami/openldap] Container not saved using podman" Jan 27, 2025
javsalgar (Contributor) commented

Hi,

It seems to me that the issue is not related to the Bitnami OpenLDAP container but to podman usage itself. Did you try asking in the podman forums?

virtualproject (Author) commented

Hi,

I use podman regularly, for example with mssql, and have never seen this issue with any other image (always rootless). That is why I reported it here.

javsalgar (Contributor) commented

Does it happen with other Bitnami containers?

virtualproject (Author) commented

Just tested with postgresql; in this case, no issues:

[postgresql@/home/postgresql ~]$ podman run -e ALLOW_EMPTY_PASSWORD=yes --name postgresql bitnami/postgresql:latest &
[1] 86335
[postgresql@/home/postgresql ~]$ postgresql 11:27:07.71 INFO ==>
postgresql 11:27:07.71 INFO ==> Welcome to the Bitnami postgresql container
postgresql 11:27:07.71 INFO ==> Subscribe to project updates by watching https://github.com/bitnami/containers
postgresql 11:27:07.71 INFO ==> Did you know there are enterprise versions of the Bitnami catalog? For enhanced secure software supply chain features, unlimited pulls from Docker, LTS support, or application customization, see Bitnami Premium or Tanzu Application Catalog. See https://www.arrow.com/globalecs/na/vendors/bitnami/ for more information.
postgresql 11:27:07.71 INFO ==>
postgresql 11:27:07.72 INFO ==> ** Starting PostgreSQL setup **
postgresql 11:27:07.74 INFO ==> Validating settings in POSTGRESQL_* env vars..
postgresql 11:27:07.74 WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
postgresql 11:27:07.75 INFO ==> Loading custom pre-init scripts...
postgresql 11:27:07.75 INFO ==> Initializing PostgreSQL database...
postgresql 11:27:07.77 INFO ==> pg_hba.conf file not detected. Generating it...
postgresql 11:27:07.77 INFO ==> Generating local authentication configuration
postgresql 11:27:08.63 INFO ==> Starting PostgreSQL in background...
postgresql 11:27:09.68 INFO ==> Changing password of postgres
postgresql 11:27:09.70 INFO ==> Configuring replication parameters
postgresql 11:27:09.72 INFO ==> Configuring synchronous_replication
postgresql 11:27:09.73 INFO ==> Configuring fsync
postgresql 11:27:09.75 INFO ==> Stopping PostgreSQL...
waiting for server to shut down.... done
server stopped
postgresql 11:27:09.85 INFO ==> Loading custom scripts...
postgresql 11:27:09.86 INFO ==> Enabling remote connections

postgresql 11:27:09.87 INFO ==> ** PostgreSQL setup finished! **
postgresql 11:27:09.89 INFO ==> ** Starting PostgreSQL **
2025-01-27 11:27:09.909 GMT [1] LOG: pgaudit extension initialized
2025-01-27 11:27:09.918 GMT [1] LOG: starting PostgreSQL 17.2 on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
2025-01-27 11:27:09.918 GMT [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2025-01-27 11:27:09.918 GMT [1] LOG: listening on IPv6 address "::", port 5432
2025-01-27 11:27:09.920 GMT [1] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2025-01-27 11:27:09.924 GMT [119] LOG: database system was shut down at 2025-01-27 11:27:09 GMT
2025-01-27 11:27:09.928 GMT [1] LOG: database system is ready to accept connections

[postgresql@/home/postgresql ~]$ podman ps
CONTAINER ID  IMAGE                                COMMAND               CREATED         STATUS         PORTS       NAMES
af74d1f30a87  docker.io/bitnami/postgresql:latest  /opt/bitnami/scri...  12 seconds ago  Up 12 seconds              postgresql

[postgresql@/home/postgresql ~]$ podman stop postgresql
2025-01-27 11:27:33.702 GMT [1] LOG: received smart shutdown request
2025-01-27 11:27:33.708 GMT [1] LOG: background worker "logical replication launcher" (PID 122) exited with exit code 1
2025-01-27 11:27:33.708 GMT [117] LOG: shutting down
2025-01-27 11:27:33.716 GMT [117] LOG: checkpoint starting: shutdown immediate
2025-01-27 11:27:33.719 GMT [117] LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.012 s; sync files=2, longest=0.001 s, average=0.001 s; distance=0 kB, estimate=0 kB; lsn=0/14E4E50, redo lsn=0/14E4E50
2025-01-27 11:27:33.723 GMT [1] LOG: database system is shut down
postgresql

[postgresql@/home/postgresql ~]$
[postgresql@/home/postgresql ~]$ podman ps -a
CONTAINER ID  IMAGE                                COMMAND               CREATED         STATUS                    PORTS       NAMES
af74d1f30a87  docker.io/bitnami/postgresql:latest  /opt/bitnami/scri...  33 seconds ago  Exited (0) 7 seconds ago              postgresql

[postgresql@/home/postgresql ~]$ podman start postgresql
postgresql

[postgresql@/home/postgresql ~]$ podman ps -a
CONTAINER ID  IMAGE                                COMMAND               CREATED         STATUS        PORTS       NAMES
af74d1f30a87  docker.io/bitnami/postgresql:latest  /opt/bitnami/scri...  46 seconds ago  Up 4 seconds              postgresql

virtualproject (Author) commented

Hi,

I have found the issue: the instructions I was following use the --rm flag, which removes the container entirely once it exits. It should not be used when the container needs to persist.

Many thanks

podman run --detach --rm --name openldap \
    --network my-network \
    --env LDAP_ADMIN_USERNAME=admin \
    --env LDAP_ADMIN_PASSWORD=adminpassword \
    --env LDAP_USERS=customuser \
    --env LDAP_PASSWORDS=custompassword \
    --env LDAP_ROOT=dc=example,dc=org \
    --env LDAP_ADMIN_DN=cn=admin,dc=example,dc=org \
    bitnami/openldap:latest
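
For anyone hitting the same thing, the fix is simply to drop --rm. A minimal variant of the same command (all other values unchanged from the snippet above) that keeps the stopped container around:

# Same invocation without --rm: the container survives a stop and stays
# visible in `podman ps -a`, so it can be brought back with
# `podman start openldap`.
podman run --detach --name openldap \
    --network my-network \
    --env LDAP_ADMIN_USERNAME=admin \
    --env LDAP_ADMIN_PASSWORD=adminpassword \
    --env LDAP_USERS=customuser \
    --env LDAP_PASSWORDS=custompassword \
    --env LDAP_ROOT=dc=example,dc=org \
    --env LDAP_ADMIN_DN=cn=admin,dc=example,dc=org \
    bitnami/openldap:latest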

javsalgar (Contributor) commented

Thanks for letting us know!
