Search before asking
I searched the issues and found no similar issues.
Component
Other
What happened + What you expected to happen
With fdedup (or the upcoming noop transform for Spark), I cannot run the Spark image to get a simple command line. Instead, I get an error message and the image does not start.
Reproduction script
cd transforms/universal/fdedup
make image-spark
docker run -it --rm fdedup-spark
This is not specific to fdedup.
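Possibly useful for triage (just a sketch on my part, not a confirmed diagnosis): checking what entrypoint and default command the image was built with, since the trace below points at the entrypoint:
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' fdedup-spark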
I get the following (do not be concerned about the amd/arm mismatch; I can successfully run other non-Spark images that print this same warning)...
WARNING: image platform (linux/amd64) does not match the expected platform (linux/arm64)
++ id -u
+ myuid=185
++ id -g
+ mygid=0
+ set +e
++ getent passwd 185
+ uidentry=spark:x:185:0::/opt/spark/work-dir:/bin/bash
+ set -e
+ '[' -z spark:x:185:0::/opt/spark/work-dir:/bin/bash ']'
+ '[' -z /opt/java/openjdk ']'
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sort -t_ -k4 -n
+ sed 's/[^=]*=\(.*\)/\1/g'
++ command -v readarray
+ '[' readarray ']'
+ readarray -t SPARK_EXECUTOR_JAVA_OPTS
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z ']'
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z ']'
+ '[' -z x ']'
+ SPARK_CLASSPATH='/opt/spark/conf::/opt/spark/jars/*'
+ SPARK_CLASSPATH='/opt/spark/conf::/opt/spark/jars/*:/opt/spark/work-dir'
+ case "$1" in
+ echo 'Non-spark-on-k8s command provided, proceeding in pass-through mode...'
Non-spark-on-k8s command provided, proceeding in pass-through mode...
+ CMD=("$@")
+ exec /usr/bin/tini -s --
tini (tini version 0.19.0)
Usage: tini [OPTIONS] PROGRAM -- [ARGS] | --version
Execute a program under the supervision of a valid init process (tini)
Command line options:
--version: Show version and exit.
-h: Show this help message and exit.
-s: Register as a process subreaper (requires Linux >= 3.4).
-p SIGNAL: Trigger SIGNAL when parent dies, e.g. "-p SIGKILL".
-v: Generate more verbose output. Repeat up to 3 times.
-w: Print a warning when processes are getting reaped.
-g: Send signals to the child's process group.
-e EXIT_CODE: Remap EXIT_CODE (from 0 to 255) to 0.
-l: Show license and exit.
Environment variables:
TINI_SUBREAPER: Register as a process subreaper (requires Linux >= 3.4).
TINI_VERBOSITY: Set the verbosity level (default: 1).
TINI_KILL_PROCESS_GROUP: Send signals to the child's process group.
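My reading of the trace (an educated guess; I have not checked the image's Dockerfile): with no command on the docker run line, "$@" is empty, so CMD=() ends up empty and the final exec /usr/bin/tini -s -- is invoked without a program, which is why tini prints its usage and exits. If that is right, either of the following should get a shell (a sketch under that assumption; /bin/bash exists in the image per the passwd entry above):
docker run -it --rm fdedup-spark /bin/bash               # hand the entrypoint's pass-through exec a program
docker run -it --rm --entrypoint /bin/bash fdedup-spark  # or bypass the entrypoint entirely
Neither is a real fix; they just suggest the image may need a default CMD so the pass-through mode has something to run.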
Anything else
This seems related to the Apple M1 (arm64), per kubeflow/spark-operator#1735
OS
MacOS (limited support)
Python
3.11.x
Are you willing to submit a PR?