
[IA-4840] [DO NOT MERGE] Testing running jupyter server from docker container for ToA #4465

Status: Open — wants to merge 60 commits into base: develop from IA-4840-toa-jupyter-docker

Commits (60):
4ee4a5d
run jupyter server from docker container in dsvm
LizBaldo Apr 22, 2024
251443e
point to new azure init script
LizBaldo Apr 22, 2024
6331285
retry command does not exist yet...
LizBaldo Apr 22, 2024
8c99d32
update path url
LizBaldo Apr 22, 2024
341f2f3
try to address command not found
LizBaldo Apr 22, 2024
c48163b
update script url
LizBaldo Apr 22, 2024
37cd343
there are no log functions either...
LizBaldo Apr 22, 2024
df50311
update url
LizBaldo Apr 22, 2024
2a1d3eb
mount persistent disk volume and make run-jupyter.sh command found
LizBaldo Apr 23, 2024
1718499
update path
LizBaldo Apr 23, 2024
178a241
fix path
LizBaldo Apr 23, 2024
39458a3
do not run as jupyter user
LizBaldo Apr 23, 2024
3299b5f
update path
LizBaldo Apr 23, 2024
801cc82
control jupyter-user uid to match the docker container
LizBaldo Apr 23, 2024
5e7e618
update path
LizBaldo Apr 23, 2024
7eb1f19
force the creation of the jupyter user home directory
LizBaldo Apr 23, 2024
07f1003
update path
LizBaldo Apr 23, 2024
ffa7990
no need to add to a user grooup
LizBaldo Apr 23, 2024
132f4e8
update path
LizBaldo Apr 23, 2024
490da5a
revert to original useradd commands
LizBaldo Apr 23, 2024
3944c4d
revert sudo when making pd directory
LizBaldo Apr 23, 2024
1bec438
update path
LizBaldo Apr 23, 2024
e6642ca
fix typo
LizBaldo Apr 23, 2024
7fa0556
update path
LizBaldo Apr 23, 2024
aebdb24
set up environment variables properly
LizBaldo Apr 24, 2024
14d3e02
update path
LizBaldo Apr 24, 2024
1e0d205
Merge branch 'develop' into IA-4840-toa-jupyter-docker
LizBaldo Apr 24, 2024
b434b90
mirror the google logic to mount PD and open permissions
LizBaldo Apr 24, 2024
c890157
update path
LizBaldo Apr 24, 2024
121fd9a
mount PD on to the original location...
LizBaldo Apr 24, 2024
5979b89
update path
LizBaldo Apr 24, 2024
e370bc6
publish the port not just exposing it
LizBaldo Apr 25, 2024
3cc6140
update path
LizBaldo Apr 25, 2024
b8d0dcd
create bridge network between containers
LizBaldo Apr 25, 2024
eb702a6
update path
LizBaldo Apr 25, 2024
6614fa9
tripple regex escape fun!
LizBaldo Apr 25, 2024
c88fa0d
clean up - I need it
LizBaldo Apr 25, 2024
f3e2a2a
update path
LizBaldo Apr 25, 2024
6174b98
revert to using host network
LizBaldo Apr 25, 2024
1ec5eab
update path
LizBaldo Apr 25, 2024
c37b678
mount jupyter user home to welder work folder
LizBaldo Apr 25, 2024
f23fae0
update path
LizBaldo Apr 25, 2024
a9072ab
give access to the working directory not just the pd directory
LizBaldo Apr 25, 2024
1822443
update path
LizBaldo Apr 25, 2024
10324be
clean up my sassy comment
LizBaldo Apr 25, 2024
ab54c67
update paths
LizBaldo Apr 25, 2024
b743f9b
add reboot command and fix shared PD volume between jupyter and welder
LizBaldo May 1, 2024
ddda60e
update paths
LizBaldo May 1, 2024
3275c4a
change notebook dir to correspond to the persistent disk and address …
LizBaldo May 1, 2024
c20c8bb
update paths
LizBaldo May 1, 2024
dfa7cc2
fix crontab command
LizBaldo May 1, 2024
a42b85d
update paths
LizBaldo May 1, 2024
0b8f8e6
triple escaping fun...
LizBaldo May 2, 2024
8947c4d
update paths
LizBaldo May 2, 2024
6a900ce
adding the cloud provider as an environment variable to the jupyter s…
LizBaldo May 31, 2024
edd5ff6
fix merge conflicts
LizBaldo Jun 3, 2024
03fa90d
update paths
LizBaldo Jun 3, 2024
45e370d
enable docker container to access GPUs
LizBaldo Jun 4, 2024
e97dcd7
update paths
LizBaldo Jun 4, 2024
f868925
Merge branch 'develop' into IA-4840-toa-jupyter-docker
LizBaldo Jun 5, 2024
264 changes: 120 additions & 144 deletions http/src/main/resources/init-resources/azure_vm_init_script.sh
@@ -8,93 +8,16 @@ set -e
# 'debconf: unable to initialize frontend: Dialog'
export DEBIAN_FRONTEND=noninteractive

#create user to run jupyter
VM_JUP_USER=jupyter
##### JUPYTER USER SETUP #####
# Create the jupyter user that corresponds to the jupyter user in the jupyter container
VM_JUP_USER=jupyter-user
VM_JUP_USER_UID=1002

sudo useradd -m -c "Jupyter User" $VM_JUP_USER
sudo useradd -m -c "Jupyter User" -u $VM_JUP_USER_UID $VM_JUP_USER
sudo usermod -a -G $VM_JUP_USER,adm,dialout,cdrom,floppy,audio,dip,video,plugdev,lxd,netdev $VM_JUP_USER

## Change ownership for the new user

sudo chgrp $VM_JUP_USER /anaconda/bin/*

sudo chown $VM_JUP_USER /anaconda/bin/*

sudo chgrp $VM_JUP_USER /anaconda/envs/py38_default/bin/*

sudo chown $VM_JUP_USER /anaconda/envs/py38_default/bin/*

sudo systemctl disable --now jupyterhub.service


# Formatting and mounting persistent disk
WORK_DIRECTORY="/home/$VM_JUP_USER/persistent_disk"
## Create the PD working directory
mkdir -p ${WORK_DIRECTORY}

## The PD should be the only `sd` disk that is not mounted yet
AllsdDisks=($(lsblk --nodeps --noheadings --output NAME --paths | grep -i "sd"))
FreesdDisks=()
for Disk in "${AllsdDisks[@]}"; do
Mounts="$(lsblk -no MOUNTPOINT "${Disk}")"
if [ -z "$Mounts" ]; then
echo "Found our unmounted persistent disk!"
FreesdDisks="${Disk}"
else
echo "Not our persistent disk!"
fi
done
DISK_DEVICE_PATH=${FreesdDisks}

## Only format the disk if it hasn't already been formatted
## If the disk has previously been in use, then it should have a partition that we can mount
EXIT_CODE=0
lsblk -no NAME --paths "${DISK_DEVICE_PATH}1" || EXIT_CODE=$?
if [ $EXIT_CODE -eq 0 ]; then
## From https://learn.microsoft.com/en-us/azure/virtual-machines/linux/attach-disk-portal?tabs=ubuntu
## Use the partprobe utility to make sure the kernel is aware of the new partition and filesystem.
## Failure to use partprobe can cause the blkid or lsblk commands to not return the UUID for the new filesystem immediately.
sudo partprobe "${DISK_DEVICE_PATH}1"
# There is a pre-existing partition that we should try to directly mount
sudo mount -t ext4 "${DISK_DEVICE_PATH}1" ${WORK_DIRECTORY}
echo "Existing PD successfully remounted"
else
## Create one partition on the PD
(
echo o #create a new empty DOS partition table
echo n #add a new partition
echo p #print the partition table
echo
echo
echo
echo w #write table to disk and exit
) | sudo fdisk ${DISK_DEVICE_PATH}
echo "successful partitioning"
## Format the partition
# It's likely that the persistent disk was previously mounted on another VM and wasn't properly unmounted
# Passing -F -F to mkfs ext4 forces the tool to ignore the state of the partition.
# Note that the command-line switch must appear twice (-F -F) to override this check
echo y | sudo mkfs.ext4 "${DISK_DEVICE_PATH}1" -F -F
echo "successful formatting"
## From https://learn.microsoft.com/en-us/azure/virtual-machines/linux/attach-disk-portal?tabs=ubuntu
## Use the partprobe utility to make sure the kernel is aware of the new partition and filesystem.
## Failure to use partprobe can cause the blkid or lsblk commands to not return the UUID for the new filesystem immediately.
sudo partprobe "${DISK_DEVICE_PATH}1"
## Mount the PD partition to the working directory
sudo mount -t ext4 "${DISK_DEVICE_PATH}1" ${WORK_DIRECTORY}
echo "successful mount"
fi

## Add the PD UUID to fstab to ensure that the drive is remounted automatically after a reboot
OUTPUT="$(lsblk -no UUID --paths "${DISK_DEVICE_PATH}1")"
echo "UUID="$OUTPUT" ${WORK_DIRECTORY} ext4 defaults 0 1" | sudo tee -a /etc/fstab
echo "successful write of PD UUID to fstab"

## Change ownership of the mounted drive to the user
sudo chown -R $VM_JUP_USER:$VM_JUP_USER ${WORK_DIRECTORY}


# Read script arguments
##### READ SCRIPT ARGUMENTS #####
# These are passed in setupCreateVmCreateMessage in the AzurePubsub Handler
echo $# arguments
if [ $# -ne 13 ];
then echo "illegal number of parameters"
@@ -119,14 +42,23 @@ WELDER_STAGING_BUCKET="${14:-dummy}"
WELDER_STAGING_STORAGE_CONTAINER_RESOURCE_ID="${15:-dummy}"

# Envs for Jupyter
JUPYTER_DOCKER_IMAGE="terradevacrpublic.azurecr.io/jupyter-server:test"
# NOTEBOOKS_DIR corresponds to the location INSIDE the jupyter docker container,
# and is not to be used within the context of the DSVM itself
NOTEBOOKS_DIR="/home/$VM_JUP_USER/persistent_disk"
WORKSPACE_NAME="${16:-dummy}"
WORKSPACE_STORAGE_CONTAINER_URL="${17:-dummy}"

# Jupyter variables for listener
SERVER_APP_BASE_URL="/${RELAY_CONNECTION_NAME}/"
SERVER_APP_ALLOW_ORIGIN="*"
HCVAR='\$hc'
# We need to escape this $ character twice, once for the docker exec arg, and another time for passing it to run-jupyter.sh
HCVAR='\\\$hc'
SERVER_APP_WEBSOCKET_URL="wss://${RELAY_NAME}.servicebus.windows.net/${HCVAR}/${RELAY_CONNECTION_NAME}"
# We need to escape this $ character one extra time to pass it to the crontab for rebooting. The use of $hc in the websocket URL is
# something that we should rethink as it creates a lot of complexity downstream
REBOOT_HCVAR='\\\\\\\$hc'
REBOOT_SERVER_APP_WEBSOCKET_URL="wss://${RELAY_NAME}.servicebus.windows.net/${REBOOT_HCVAR}/${RELAY_CONNECTION_NAME}"
SERVER_APP_WEBSOCKET_HOST="${RELAY_NAME}.servicebus.windows.net"
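The layered escaping above can be illustrated with nested shell evaluations: each evaluation consumes one level of backslashes. The `bash -c` calls below are stand-ins for the docker exec argument parsing and the run-jupyter.sh invocation (an assumption about where each layer is consumed), not the actual container commands.

```shell
# Each shell evaluation consumes one layer of backslash escaping.
# Nested `bash -c` calls simulate the two evaluation layers the
# literal '\\\$hc' must survive before $hc reaches Jupyter unexpanded.
HCVAR='\\\$hc'                           # the docker-exec-level escape
AFTER_EXEC=$(bash -c "echo $HCVAR")      # first layer consumed -> \$hc
AFTER_RUN=$(bash -c "echo $AFTER_EXEC")  # second layer consumed -> $hc
echo "$AFTER_EXEC"
echo "$AFTER_RUN"
```

The final value is the literal string `$hc`, never expanded as a shell variable, which is what the websocket URL requires.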

# Relay listener configuration
@@ -168,57 +100,115 @@ echo "RUNTIME_NAME = ${RUNTIME_NAME}"
echo "VALID_HOSTS = ${VALID_HOSTS}"
echo "R-VERSION = ${R_VERSION}"

# Wait for the dpkg lock to be released before any installs, to avoid this error: https://broadworkbench.atlassian.net/browse/IA-4645

while sudo fuser /var/lib/dpkg/lock-frontend > /dev/null 2>&1
do
echo "Waiting to get lock /var/lib/dpkg/lock-frontend..."
sleep 5
done

# Install updated R version
echo "Installing R version ${R_VERSION}"
# Add the CRAN repository to the sources list
echo "deb https://cloud.r-project.org/bin/linux/ubuntu focal-cran40/" | sudo tee /etc/apt/sources.list -a
# Update package list
sudo apt-get update
# Install new R version
sudo apt-get install --no-install-recommends -y r-base=${R_VERSION}

#Update kernel list

echo "Y"| /anaconda/bin/jupyter kernelspec remove sparkkernel

echo "Y"| /anaconda/bin/jupyter kernelspec remove sparkrkernel

echo "Y"| /anaconda/bin/jupyter kernelspec remove pysparkkernel

echo "Y"| /anaconda/bin/jupyter kernelspec remove spark-3-python
##### Persistent Disk (PD) MOUNTING #####
# Formatting and mounting persistent disk
# Note that we cannot mount in /mnt/disks/work as it is a temporary disk on the DSVM!
PD_DIRECTORY="/home/$VM_JUP_USER/persistent_disk"
## Create the persistent disk directory
mkdir -p ${PD_DIRECTORY}

#echo "Y"| /anaconda/bin/jupyter kernelspec remove julia-1.6
## The PD should be the only `sd` disk that is not mounted yet
AllsdDisks=($(lsblk --nodeps --noheadings --output NAME --paths | grep -i "sd"))
FreesdDisks=()
for Disk in "${AllsdDisks[@]}"; do
Mounts="$(lsblk -no MOUNTPOINT "${Disk}")"
if [ -z "$Mounts" ]; then
echo "Found our unmounted persistent disk!"
FreesdDisks="${Disk}"
else
echo "Not our persistent disk!"
fi
done
DISK_DEVICE_PATH=${FreesdDisks}
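The selection loop above keeps whichever `sd` device reports no mountpoint. A minimal local sketch of that logic, with made-up device names and a stub function in place of `lsblk`:

```shell
# Sketch of the unmounted-disk selection; device names and mountpoints
# are fabricated, and mounts_for stands in for: lsblk -no MOUNTPOINT "$1"
DISKS="/dev/sda /dev/sdb /dev/sdc"
mounts_for() {
  case "$1" in
    /dev/sda) echo "/" ;;
    /dev/sdb) echo "/mnt" ;;
    /dev/sdc) echo "" ;;     # the unmounted persistent disk
  esac
}
FREE=""
for Disk in $DISKS; do
  if [ -z "$(mounts_for "$Disk")" ]; then
    FREE="$Disk"             # only a disk with no mountpoint is kept
  fi
done
echo "$FREE"
```

As in the script, if more than one disk were unmounted the last one seen would win, so the "only one free `sd` disk" assumption matters.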

echo "Y"| /anaconda/envs/py38_default/bin/pip3 install ipykernel pydevd
## Only format the disk if it hasn't already been formatted
## If the disk has previously been in use, then it should have a partition that we can mount
EXIT_CODE=0
lsblk -no NAME --paths "${DISK_DEVICE_PATH}1" || EXIT_CODE=$?
if [ $EXIT_CODE -eq 0 ]; then
## From https://learn.microsoft.com/en-us/azure/virtual-machines/linux/attach-disk-portal?tabs=ubuntu
## Use the partprobe utility to make sure the kernel is aware of the new partition and filesystem.
## Failure to use partprobe can cause the blkid or lsblk commands to not return the UUID for the new filesystem immediately.
sudo partprobe "${DISK_DEVICE_PATH}1"
# There is a pre-existing partition that we should try to directly mount
sudo mount -t ext4 "${DISK_DEVICE_PATH}1" ${PD_DIRECTORY}
echo "Existing PD successfully remounted"
else
## Create one partition on the PD
(
echo o #create a new empty DOS partition table
echo n #add a new partition
echo p #print the partition table
echo
echo
echo
echo w #write table to disk and exit
) | sudo fdisk ${DISK_DEVICE_PATH}
echo "successful partitioning"
## Format the partition
echo y | sudo mkfs -t ext4 "${DISK_DEVICE_PATH}1"
echo "successful formatting"
## From https://learn.microsoft.com/en-us/azure/virtual-machines/linux/attach-disk-portal?tabs=ubuntu
## Use the partprobe utility to make sure the kernel is aware of the new partition and filesystem.
## Failure to use partprobe can cause the blkid or lsblk commands to not return the UUID for the new filesystem immediately.
sudo partprobe "${DISK_DEVICE_PATH}1"
## Mount the PD partition to the working directory
sudo mount -t ext4 "${DISK_DEVICE_PATH}1" ${PD_DIRECTORY}
echo "successful mount"
fi

echo "Y"| /anaconda/envs/py38_default/bin/python3 -m ipykernel install
## Add the PD UUID to fstab to ensure that the drive is remounted automatically after a reboot
OUTPUT="$(lsblk -no UUID --paths "${DISK_DEVICE_PATH}1")"
echo "UUID="$OUTPUT" ${PD_DIRECTORY} ext4 defaults 0 1" | sudo tee -a /etc/fstab
echo "successful write of PD UUID to fstab"

# Start Jupyter server with custom parameters
sudo runuser -l $VM_JUP_USER -c "mkdir -p /home/$VM_JUP_USER/.jupyter"
sudo runuser -l $VM_JUP_USER -c "wget -qP /home/$VM_JUP_USER/.jupyter https://raw.githubusercontent.com/DataBiosphere/leonardo/ea519ef899de28e27e2a37ba368433da9fd03b7f/http/src/main/resources/init-resources/jupyter_server_config.py"
# We pull the jupyter_delocalize.py file from the base terra-docker python image; it was designed for notebooks, so we need a couple of changes to make it work with jupyter server instead
sudo runuser -l $VM_JUP_USER -c "wget -qP /anaconda/lib/python3.10/site-packages https://raw.githubusercontent.com/DataBiosphere/terra-docker/0ea6d2ebd7fcae7072e01e1c2f2d178390a276b0/terra-jupyter-base/custom/jupyter_delocalize.py"
sudo runuser -l $VM_JUP_USER -c "sed -i 's/notebook.services/jupyter_server.services/g' /anaconda/lib/python3.10/site-packages/jupyter_delocalize.py"
sudo runuser -l $VM_JUP_USER -c "sed -i 's/http:\/\/welder:8080/http:\/\/127.0.0.1:8081/g' /anaconda/lib/python3.10/site-packages/jupyter_delocalize.py"
## Make sure that both the jupyter and welder users have access to the persistent disk on the VM
## This needs to happen before we start up containers
sudo chmod a+rwx ${PD_DIRECTORY}

echo "------ Jupyter ------"
##### JUPYTER SERVER #####
echo "------ Jupyter version: ${JUPYTER_DOCKER_IMAGE} ------"
echo "Starting Jupyter with command..."

echo "sudo runuser -l $VM_JUP_USER -c \"/anaconda/bin/jupyter server --ServerApp.base_url=$SERVER_APP_BASE_URL --ServerApp.websocket_url=$SERVER_APP_WEBSOCKET_URL --ServerApp.contents_manager_class=jupyter_delocalize.WelderContentsManager --autoreload &> /home/$VM_JUP_USER/jupyter.log\"" >/dev/null 2>&1&
echo "docker run -d --gpus all --restart always --network host --name jupyter \
--entrypoint tail \
--volume ${PD_DIRECTORY}:${NOTEBOOKS_DIR} \
-e CLOUD_PROVIDER=Azure \
-e WORKSPACE_ID=$WORKSPACE_ID \
-e WORKSPACE_NAME=$WORKSPACE_NAME \
-e WORKSPACE_STORAGE_CONTAINER_URL=$WORKSPACE_STORAGE_CONTAINER_URL \
-e STORAGE_CONTAINER_RESOURCE_ID=$WORKSPACE_STORAGE_CONTAINER_ID \
$JUPYTER_DOCKER_IMAGE \
-f /dev/null"

# Run the docker container with the Jupyter Server.
# Override the entrypoint with a placeholder (tail -f /dev/null) to keep the container running indefinitely;
# the Jupyter server itself is started via docker exec afterwards.
# Mount the persistent disk directory to the Jupyter notebook home directory.
docker run -d --gpus all --restart always --network host --name jupyter \
--entrypoint tail \
--volume ${PD_DIRECTORY}:${NOTEBOOKS_DIR} \
--env CLOUD_PROVIDER=Azure \
--env WORKSPACE_ID=$WORKSPACE_ID \
--env WORKSPACE_NAME=$WORKSPACE_NAME \
--env WORKSPACE_STORAGE_CONTAINER_URL=$WORKSPACE_STORAGE_CONTAINER_URL \
--env STORAGE_CONTAINER_RESOURCE_ID=$WORKSPACE_STORAGE_CONTAINER_ID \
$JUPYTER_DOCKER_IMAGE \
-f /dev/null

echo 'Starting Jupyter Notebook...'
echo "docker exec -d jupyter /bin/bash -c '/usr/jupytervenv/run-jupyter.sh ${SERVER_APP_BASE_URL} ${SERVER_APP_WEBSOCKET_URL} ${NOTEBOOKS_DIR}'"
docker exec -d jupyter /bin/bash -c "/usr/jupytervenv/run-jupyter.sh ${SERVER_APP_BASE_URL} ${SERVER_APP_WEBSOCKET_URL} ${NOTEBOOKS_DIR}"
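The placeholder-entrypoint pattern can be simulated without docker: a long-lived no-op process stands in for the container kept alive by `tail -f /dev/null`, while the workload is launched alongside it, as `docker exec` does. This is an illustrative local sketch, not the production commands.

```shell
# Simulate `--entrypoint tail ... -f /dev/null` plus `docker exec`:
# a no-op process keeps the unit alive; the workload runs separately.
tail -f /dev/null &                    # placeholder: never exits on its own
PLACEHOLDER_PID=$!
WORKLOG=$(mktemp)
echo "workload output" > "$WORKLOG"    # stands in for the docker exec step
kill -0 "$PLACEHOLDER_PID" && ALIVE=yes || ALIVE=no
kill "$PLACEHOLDER_PID"
echo "$ALIVE"
```

Keeping the container's PID 1 as a process that never exits is what lets `--restart always` govern container lifetime independently of the Jupyter process.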

sudo runuser -l $VM_JUP_USER -c "/anaconda/bin/jupyter server --ServerApp.base_url=$SERVER_APP_BASE_URL --ServerApp.websocket_url=$SERVER_APP_WEBSOCKET_URL --ServerApp.contents_manager_class=jupyter_delocalize.WelderContentsManager --autoreload &> /home/$VM_JUP_USER/jupyter.log" >/dev/null 2>&1&
# Store Jupyter Server Docker exec command for reboot processes
# Cron does not play well with escaped backslashes, so it is safer to run a script instead of the docker command directly
echo "docker exec -d jupyter /bin/bash -c '/usr/jupytervenv/run-jupyter.sh ${SERVER_APP_BASE_URL} ${REBOOT_SERVER_APP_WEBSOCKET_URL} ${NOTEBOOKS_DIR}'" | sudo tee /home/reboot_script.sh
sudo chmod +x /home/reboot_script.sh
sudo crontab -l 2>/dev/null| cat - <(echo "@reboot /home/reboot_script.sh") | crontab -
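Note that re-running this init script would append the `@reboot` line again. A sketch of an idempotent variant, using a temp file as a stand-in for the live crontab so it can run anywhere:

```shell
# Idempotent append of the @reboot entry: grep guards against
# duplicates. The temp file stands in for `crontab -l` output.
ENTRY="@reboot /home/reboot_script.sh"
CRON_FILE=$(mktemp)
for run in 1 2; do                     # simulate running the script twice
  grep -qxF "$ENTRY" "$CRON_FILE" || echo "$ENTRY" >> "$CRON_FILE"
done
COUNT=$(grep -cxF "$ENTRY" "$CRON_FILE")
echo "$COUNT"
rm -f "$CRON_FILE"
```

The `grep -qxF` guard matches the whole line literally, so the entry is written exactly once however many times the script runs.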

# Store Jupyter Server parameters for reboot processes
sudo crontab -l 2>/dev/null| cat - <(echo "@reboot sudo runuser -l $VM_JUP_USER -c '/anaconda/bin/jupyter server --ServerApp.base_url=$SERVER_APP_BASE_URL --ServerApp.websocket_url=$SERVER_APP_WEBSOCKET_URL --ServerApp.contents_manager_class=jupyter_delocalize.WelderContentsManager --autoreload &> /home/$VM_JUP_USER/jupyter.log' >/dev/null 2>&1&") | crontab -
echo "------ Jupyter done ------"

##### LISTENER #####
echo "------ Listener version: ${LISTENER_DOCKER_IMAGE} ------"
echo " Starting listener with command..."

@@ -265,11 +255,12 @@ $LISTENER_DOCKER_IMAGE

echo "------ Listener done ------"

##### WELDER #####
echo "------ Welder version: ${WELDER_WELDER_DOCKER_IMAGE} ------"
echo " Starting Welder with command...."

echo "docker run -d --restart always --network host --name welder \
--volume \"/home/${VM_JUP_USER}\":\"/work\" \
--volume "${PD_DIRECTORY}:/work" \
-e WSM_URL=$WELDER_WSM_URL \
-e PORT=8081 \
-e WORKSPACE_ID=$WORKSPACE_ID \
@@ -283,7 +274,7 @@ echo "docker run -d --restart always --network host --name welder \
$WELDER_WELDER_DOCKER_IMAGE"

docker run -d --restart always --network host --name welder \
--volume "/home/${VM_JUP_USER}":"/work" \
--volume "${PD_DIRECTORY}:/work" \
--env WSM_URL=$WELDER_WSM_URL \
--env PORT=8081 \
--env WORKSPACE_ID=$WORKSPACE_ID \
@@ -296,19 +287,4 @@ docker run -d --restart always --network host --name welder \
--env SHOULD_BACKGROUND_SYNC="false" \
$WELDER_WELDER_DOCKER_IMAGE

echo "------ Welder done ------"

# This next command creates a json file which contains the "env" variables to be added to the kernel.json files.
jq --null-input \
--arg workspace_id "${WORKSPACE_ID}" \
--arg workspace_storage_container_id "${WORKSPACE_STORAGE_CONTAINER_ID}" \
--arg workspace_name "${WORKSPACE_NAME}" \
--arg workspace_storage_container_url "${WORKSPACE_STORAGE_CONTAINER_URL}" \
'{ "env": { "WORKSPACE_ID": $workspace_id, "WORKSPACE_STORAGE_CONTAINER_ID": $workspace_storage_container_id, "WORKSPACE_NAME": $workspace_name, "WORKSPACE_STORAGE_CONTAINER_URL": $workspace_storage_container_url }}' \
> wsenv.json

# The next commands iterate through the available kernels, using jq to include the env variables from the previous step
/anaconda/bin/jupyter kernelspec list | awk 'NR>1 {print $2}' | while read line; do jq -s add $line"/kernel.json" wsenv.json > tmpkernel.json && mv tmpkernel.json $line"/kernel.json"; done
/anaconda/envs/py38_default/bin/jupyter kernelspec list | awk 'NR>1 {print $2}' | while read line; do jq -s add $line"/kernel.json" wsenv.json > tmpkernel.json && mv tmpkernel.json $line"/kernel.json"; done
/anaconda/envs/azureml_py38/bin/jupyter kernelspec list | awk 'NR>1 {print $2}' | while read line; do jq -s add $line"/kernel.json" wsenv.json > tmpkernel.json && mv tmpkernel.json $line"/kernel.json"; done
/anaconda/envs/azureml_py38_PT_and_TF/bin/jupyter kernelspec list | awk 'NR>1 {print $2}' | while read line; do jq -s add $line"/kernel.json" wsenv.json > tmpkernel.json && mv tmpkernel.json $line"/kernel.json"; done
echo "------ Welder done ------"
2 changes: 1 addition & 1 deletion http/src/main/resources/reference.conf
@@ -251,7 +251,7 @@ azure {
type = "CustomScript",
version = "2.1",
minor-version-auto-upgrade = true,
file-uris = ["https://raw.githubusercontent.com/DataBiosphere/leonardo/788e53e22dab4f0cee6e7b7cdbfd271a0b43450d/http/src/main/resources/init-resources/azure_vm_init_script.sh"]
file-uris = ["https://raw.githubusercontent.com/DataBiosphere/leonardo/45e370d6475106eb63242f556ab4310a78d03653/http/src/main/resources/init-resources/azure_vm_init_script.sh"]
}
listener-image = "terradevacrpublic.azurecr.io/terra-azure-relay-listeners:76d982c"
}
@@ -108,7 +108,7 @@ class WsmCodecSpec extends AnyFlatSpec with Matchers {
| "minorVersionAutoUpgrade": true,
| "protectedSettings": [{
| "key": "fileUris",
| "value": ["https://raw.githubusercontent.com/DataBiosphere/leonardo/788e53e22dab4f0cee6e7b7cdbfd271a0b43450d/http/src/main/resources/init-resources/azure_vm_init_script.sh"]
| "value": ["https://raw.githubusercontent.com/DataBiosphere/leonardo/45e370d6475106eb63242f556ab4310a78d03653/http/src/main/resources/init-resources/azure_vm_init_script.sh"]
| },
| {
| "key": "commandToExecute",
@@ -73,7 +73,7 @@ class ConfigReaderSpec extends AnyFlatSpec with Matchers {
"2.1",
true,
List(
"https://raw.githubusercontent.com/DataBiosphere/leonardo/788e53e22dab4f0cee6e7b7cdbfd271a0b43450d/http/src/main/resources/init-resources/azure_vm_init_script.sh"
"https://raw.githubusercontent.com/DataBiosphere/leonardo/45e370d6475106eb63242f556ab4310a78d03653/http/src/main/resources/init-resources/azure_vm_init_script.sh"
)
),
"terradevacrpublic.azurecr.io/terra-azure-relay-listeners:76d982c",