
Commit 1aa154c

add results & adjust docs+utils

1 parent bbd2b6f commit 1aa154c

20 files changed: +3096 −52 lines

README.md

Lines changed: 0 additions & 2 deletions

````diff
@@ -26,8 +26,6 @@ This Repository is structured as follows:
 
 ## Reproduction of Results
 
-We were not yet able to make all configurations in this repository independent of our AWS account and to upload our measurements as CSV files. We will improve this in the coming days.
-
 All plots and intermediate results can be reproduced with the code and README descriptions in the microbenchmarks directories mentioned above.
 Our results are part of this repo as well for exploration and replotting. Make sure to [prepare the according tools](#plotting).
 To rerun the benchmarks make sure to follow the [requirements instructions](#requirements) below.
````

aws/.gitignore

Lines changed: 1 addition & 0 deletions

````diff
@@ -1 +1,2 @@
 venv
+awsenv
````

aws/README.md

Lines changed: 27 additions & 15 deletions

````diff
@@ -1,5 +1,7 @@
 # EC2 Provisioning scripts
 
+This page contains a short guide to setting up the AWS-specific tools and configurations needed to create and manage EC2 instances and access S3 resources on AWS. We used the AWS SSO feature for authentication, towards which this guide is tailored. If you want or need to use another authentication method, some steps and provided utilities might not be compatible without manual adjustments.
+
 ## Getting started
 
 1) Install aws cli2:
@@ -18,18 +20,24 @@
    sudo installer -pkg AWSCLIV2.pkg -target /
    ```
 
-2) Configure AWS cli: `aws configure sso`
+1) Create an `awsenv` file in this directory (`{projectRoot}/aws/awsenv`). It is excluded from git and stores all private AWS-specific configuration used by [`awsrc`](./awsrc) and some other places throughout this project. The variables should be inserted line by line in the format `KEY="value"` and will be exported as environment variables via `source awsrc`. You don't need to fill them out now, as they will be explained in the following steps along with their respective AWS configurations. As an overview, they will include:
+   - your SSH key name (`AWS_SSH_KEY_NAME`)
+   - your AWS SSO profile (`AWS_PROFILE`, `S3_PROFILE`)
+   - the AWS subnet ID(s) of your private AWS cloud network(s) (`AWS_SUBNET_DEFAULT`, `AWS_SUBNET_ALL`)
+
+1) Configure AWS cli: `aws configure sso`
    - As SSO start URL use <https://d-[your-project-id].awsapps.com/start/#>
    - As SSO region use [your-sso-region]
    - Confirm in web browser
    - choose your CLI default client region where you want to deploy your EC2 instances and store your results
    - choose your CLI profile name, e.g. nitro (preferably something short and easy to remember, since you will need it in several other places)
-   - set your `AWS_PROFILE` for use in the [`awsrc`](./awsrc)
-     ```bash
-     export AWS_PROFILE=[your-sso-profile]
+   - set your `AWS_PROFILE` for use in [`awsrc`](./awsrc) and `S3_PROFILE` for downloading the results in the microbenchmarks, both in your [`awsenv`](./awsenv) file
+     ```shell
+     AWS_PROFILE=[your-sso-profile]
+     S3_PROFILE=$AWS_PROFILE
      ```
    - Other options can be set as you want.
-3) Setup python venv with dependencies:
+1) Set up the python venv with dependencies:
    Run the `setup_venv.sh` script or use manual steps:
 
    ```bash
@@ -38,21 +46,25 @@
    pip3 install -r requirements.txt
    ```
 
-4) Setup VPC (a default VPC should be created automatically in each region)
-5) Set AWS SSH Key pair in the EC2 console under `Network & Security > Key Pairs`. Importing existing key pairs is possible under Actions. Save the key name in the environment variable `AWS_SSH_KEY_NAME` in your local shell. This is required for the shell functions and aliases in `awsrc`
-6) Allow SSH access from the internet to instances you create:
+1) Set up the VPC (a default VPC should be created automatically in each region). The subnet IDs should be stored in the [`awsenv`](./awsenv) file; they can be looked up in the AWS console in the browser or via `ec2laz`. These settings specify where instances created via the shell functions from [`awsrc`](./awsrc) will be started.
+   ```
+   AWS_SUBNET_DEFAULT=[default-subnet]
+   AWS_SUBNET_ALL="subnet-id-1 subnet-id-2 ..."
+   ```
+1) Set AWS SSH Key pair in the EC2 console under `Network & Security > Key Pairs`. Importing existing key pairs is possible under Actions. Save the key name as `AWS_SSH_KEY_NAME` in your [`awsenv`](./awsenv) file.
+1) Allow SSH access from the internet to instances you create:
    1) Go to Security Groups in the EC2 console: <https://[your-client-region].console.aws.amazon.com/ec2/home?region=[your-client-region]#SecurityGroups:>
    2) Select the existing default security group
    3) Go to `Inbound Rules > Edit inbound rules`
    4) Create a rule that allows SSH access from your IP/from the Internet
    5) Save
-7) Create an S3 access role that can be attached to EC2 instances:
+1) Create an S3 access role that can be attached to EC2 instances:
    1) Go to IAM/Roles <https://[your-client-region].console.aws.amazon.com/iam/home#/roles>
    2) Create role > AWS Service > EC2 > Next > tick AmazonS3FullAccess > Next > name the role "EC2-S3-access-role" > Create Role
-8) Create an S3 Bucket for benchmarking
-9) For some useful settings, aliases, and functions. Load `awsrc` in your shell with `source awsrc`.
-10) Try if ec2 instance creation works with: `ec2c c6i.2xlarge`
-11) Set up SSH key forwarding to easily clone this git repository onto the created machines. For example, add the following to your `.ssh/config`:
+1) Create an S3 Bucket for benchmarking, e.g. `nitro-enclaves-result-bucket`
+1) For some useful settings, aliases, and functions, (re)load `awsrc` in your shell with `source awsrc`.
+1) Check that EC2 instance creation works with: `ec2c c6i.2xlarge`
+1) Set up SSH key forwarding to easily clone this git repository onto the created machines. For example, add the following to your `.ssh/config`:
 
    ```bash
    Host *.compute.amazonaws.com
@@ -63,5 +75,5 @@
      ForwardAgent yes
    ```
 
-12) Get the instance public DNS name with `ec2li`.
-13) Clone repository and install requirements on the new instance with `ec2setup INSTANCE_DNS` substitute INSTANCE_DNS with the result of the previous step.
+1) Get the instance public DNS name with `ec2li`.
+1) Clone the repository and install requirements on the new instance with `ec2setup INSTANCE_DNS`, substituting INSTANCE_DNS with the result of the previous step.
````
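Taken together, a filled-in `awsenv` might look like the sketch below. Every value is an illustrative placeholder, not a real key name, profile, or subnet ID:

```shell
# aws/awsenv -- all values below are placeholders
AWS_SSH_KEY_NAME="my-benchmark-key"
AWS_PROFILE="nitro"
S3_PROFILE=$AWS_PROFILE
AWS_SUBNET_DEFAULT="subnet-0123456789abcdef0"
AWS_SUBNET_ALL="subnet-0123456789abcdef0 subnet-0123456789abcdef1"
```

Since `awsrc` sources this file under `set -a`, plain `KEY="value"` lines suffice; no `export` keywords are needed.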

aws/awsrc

Lines changed: 34 additions & 16 deletions

````diff
@@ -1,3 +1,11 @@
+
+# get the script directory
+AWSRC_DIR=$(readlink -f "$(dirname "$0")")
+export AWSRC_DIR="$AWSRC_DIR"
+set -a # Automatically export variables
+source "$AWSRC_DIR"/awsenv
+set +a # Disable automatic export
+
 # List instances
 alias ec2li="aws --profile $AWS_PROFILE ec2 describe-instances --output table --query \"Reservations[*].Instances[*].{Instance:InstanceId,State:State.Name,PrivateIp:PrivateIpAddress,PublicDnsName:PublicDnsName,Subnet:SubnetId,InstanceType:InstanceType,LaunchTime:LaunchTime}\""
 
@@ -13,11 +21,11 @@ export AWS_PAGER=""
 # Create instance
 # Parameter 1 (required): instance type
 # Parameter 2 (default 1): count
-# Parameter 3 (default subnet-08b661064a57e5dbd): subnet, determines availability zone (default 2a)
+# Parameter 3 (default $AWS_SUBNET_DEFAULT): subnet, determines availability zone (default 2a)
 # Use ec2laz to list availability zones and connected subnets in case you need to start the VM in a different AZ than the default 2a.
 ec2c() {
     count=${2:-1}
-    subnet=${3:-"subnet-08b661064a57e5dbd"}
+    subnet=${3:-"$AWS_SUBNET_DEFAULT"}
     network_config="{\"SubnetId\":\"$subnet\",\"AssociatePublicIpAddress\":true,\"DeviceIndex\":0,\"Groups\":[\"sg-0772ae4b8cfc0e188\"]}"
     echo "$network_config"
     aws --profile $AWS_PROFILE ec2 run-instances \
@@ -27,15 +35,15 @@ ec2c() {
         --key-name "$AWS_SSH_KEY_NAME" \
         --enclave-options 'Enabled=true' \
         --network-interfaces=$network_config \
-        --instance-market-options file://spot-options.json \
+        --instance-market-options file://"$AWSRC_DIR"/spot-options.json \
         --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":16}}]' \
         --iam-instance-profile 'Name="EC2-S3-access-role"'
 }
 
 # Create an instance with spot pricing and Graviton CPU
 ec2cg() {
     count=${2:-1}
-    subnet=${3:-"subnet-08b661064a57e5dbd"}
+    subnet=${3:-"$AWS_SUBNET_DEFAULT"}
     network_config="{\"SubnetId\":\"$subnet\",\"AssociatePublicIpAddress\":true,\"DeviceIndex\":0,\"Groups\":[\"sg-0772ae4b8cfc0e188\"]}"
     echo "$network_config"
     aws --profile $AWS_PROFILE ec2 run-instances \
@@ -45,7 +53,7 @@ ec2cg() {
         --key-name "$AWS_SSH_KEY_NAME" \
         --enclave-options 'Enabled=true' \
         --network-interfaces=$network_config \
-        --instance-market-options file://spot-options.json \
+        --instance-market-options file://"$AWSRC_DIR"/spot-options.json \
         --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":16}}]' \
         --iam-instance-profile 'Name="EC2-S3-access-role"'
 }
@@ -59,27 +67,37 @@ ec2ce() {
         --instance-type "$1" \
         --key-name "$AWS_SSH_KEY_NAME" \
         --enclave-options 'Enabled=true' \
-        --network-interfaces '{"SubnetId":"subnet-08b661064a57e5dbd","AssociatePublicIpAddress":true,"DeviceIndex":0,"Groups":["sg-0772ae4b8cfc0e188"]}' \
+        --network-interfaces "{\"SubnetId\":\"$AWS_SUBNET_DEFAULT\",\"AssociatePublicIpAddress\":true,\"DeviceIndex\":0,\"Groups\":[\"sg-0772ae4b8cfc0e188\"]}" \
         --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":16}}]' \
         --iam-instance-profile 'Name="EC2-S3-access-role"'
 }
 
-# Create one instance in each availability zone that we have a subnet for. Subnets are hard-coded.
+# Create instance(s) in each availability zone in $AWS_SUBNET_ALL
+# Parameter 1 (required): instance type
+# Parameter 2 (default 1): count
+# Parameter 3 (default empty): g for Graviton CPU
 ec2caz() {
-    subnets=("subnet-08b661064a57e5dbd" "subnet-05994991e88c444ed" "subnet-0f46f92778eebc592")
-    for subnet in "${subnets[@]}";
+    count=${2:-1}
+    for subnet in $(echo "$AWS_SUBNET_ALL" | tr " " "\n");
    do
-        if [[ "$2" == "1" ]]; then
-            ec2cg "$1" 1 "$subnet"
+        echo "starting $count instances on subnet: $subnet"
+        if [[ "$3" == "g" ]]; then
+            ec2cg "$1" "$count" "$subnet"
         else
-            ec2c "$1" 1 "$subnet"
+            ec2c "$1" "$count" "$subnet"
         fi
     done
 }
 
 # Setup instance
 ec2setup() {
-    ssh -o "StrictHostKeyChecking no" "$1" 'sudo dnf install git -y && ssh -T -o "StrictHostKeyChecking accept-new" git@github.com ; mkdir -p ~/AWSNitroBenchmark && cd ~/AWSNitroBenchmark && git clone git@github.com:DataManagementLab/nitro-enclaves-benchmarks.git . && cd aws && chmod +x setup_ec2.sh && ./setup_ec2.sh'
+    ssh -o "StrictHostKeyChecking no" "$1" '
+        sudo dnf install git -y &&
+        ssh -T -o "StrictHostKeyChecking accept-new" git@github.com ;
+        mkdir -p ~/AWSNitroBenchmark && cd ~/AWSNitroBenchmark &&
+        git clone '"$(cd "$AWSRC_DIR" && git remote get-url origin)"' . &&
+        cd aws && chmod +x setup_ec2.sh && ./setup_ec2.sh
+    '
 }
 
 # Terminate instance
@@ -94,11 +112,11 @@ s3sr() {
 
 # Spot & Pricing Information
 # Parameter 1 (required): instance type(s)
-# Parameter 2 (default us-west-2): region
+# Parameter 2 (default derived from profile): region
 # Parameter 3 (only ec2sp, default 0): lookback hours
 
 ec2sp() {
-    region=${2:-"us-west-2"}
+    region=${2:-"$(aws configure get region --profile $AWS_PROFILE)"}
     lookback_hours=${3:-0}
     start_time=$(date -u -d "-${lookback_hours} hours" +%FT%TZ 2>/dev/null || date -u -v-"${lookback_hours}"H +%FT%TZ)
     aws --profile $AWS_PROFILE ec2 describe-spot-price-history \
@@ -110,7 +128,7 @@ ec2sp() {
 }
 
 ec2ssc() {
-    region=${2:-"us-west-2"}
+    region=${2:-"$(aws configure get region --profile $AWS_PROFILE)"}
     aws --profile $AWS_PROFILE ec2 get-spot-placement-scores \
         --instance-types "$1" \
         --target-capacity 2 \
````
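With `awsenv` populated, a typical session using these helpers could look like the sketch below; the instance types are examples and the DNS name is a placeholder. One caveat: `$(dirname "$0")` only resolves to `awsrc` itself when the file is sourced from zsh (or executed directly); under bash, `${BASH_SOURCE[0]}` would be the equivalent.

```shell
source awsrc                  # exports the awsenv variables and loads the helpers
ec2c c6i.2xlarge              # one spot instance in $AWS_SUBNET_DEFAULT
ec2caz c7g.2xlarge 2 g        # two Graviton instances per subnet in $AWS_SUBNET_ALL
ec2li                         # instance table; copy the PublicDnsName column
ec2setup ec2-0-0-0-0.us-west-2.compute.amazonaws.com  # placeholder DNS name
```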

microbenchmarks/SockLatency/plot/plot.R

Lines changed: 24 additions & 16 deletions

````diff
@@ -44,24 +44,32 @@ df <- lapply(csv_files, function(file) {
     return(NULL) # Skip invalid files
   }
 }) %>%
-  bind_rows() %>%
-  mutate_if(is.character, trimws) %>%
-  mutate(
-    architecture_id = case_when(
-      scenario == "single_instance" & protocol == "inet" ~ 1,
-      scenario == "single_instance" & protocol == "vsock" ~ 2,
-      scenario == "single_instance_proxy" ~ 3,
-      scenario == "cross_instance_host2host" ~ 4,
-      scenario == "cross_instance_host2enclave" ~ 5,
-      scenario == "cross_instance_proxy" ~ 6,
-      TRUE ~ NA
-    )
-  )
+  bind_rows()
 
-# store the data
 agg_store_path <- normalizePath("../results/data/results.csv", mustWork = FALSE)
-write.csv(df, file = agg_store_path, row.names = FALSE)
-print(paste("Combined data stored to:", agg_store_path))
+if (nrow(df)) {
+  # process combined data & cleanup
+  df <- df %>%
+    mutate_if(is.character, trimws) %>%
+    mutate(
+      architecture_id = case_when(
+        scenario == "single_instance" & protocol == "inet" ~ 1,
+        scenario == "single_instance" & protocol == "vsock" ~ 2,
+        scenario == "single_instance_proxy" ~ 3,
+        scenario == "cross_instance_host2host" ~ 4,
+        scenario == "cross_instance_host2enclave" ~ 5,
+        scenario == "cross_instance_proxy" ~ 6,
+        TRUE ~ NA
+      )
+    )
+
+  # store the data
+  write.csv(df, file = agg_store_path, row.names = FALSE)
+  print(paste("Combined data stored to:", agg_store_path))
+} else {
+  df <- read.csv(agg_store_path, stringsAsFactors = TRUE)
+  print(paste("Results read from", agg_store_path))
+}
 
 # all plots from the paper were generated via plot.py
 # for your own exploration you can use the following code
````
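With this change the aggregation step degrades gracefully: if no raw benchmark CSVs are found, the script falls back to the aggregated `results.csv` committed to the repo instead of writing an empty file, so replotting works straight from the shipped results. A hypothetical invocation, assuming R and the plotting dependencies are installed:

```shell
cd microbenchmarks/SockLatency/plot
Rscript plot.R   # with no raw CSVs present, reads ../results/data/results.csv
```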

microbenchmarks/SockLatency/plot/plot.py

Lines changed: 1 addition & 1 deletion

````diff
@@ -26,7 +26,7 @@ def plot_paper():
         & (df["architecture_id"] <= 5)
     ]
 
-    df.to_csv(f"{DATA_DIR}/filtered.csv")
+    df.to_csv(f"{DATA_DIR}/filtered.csv", index=False)
 
     # Project to required columns
     x_axis = "Message Size [Byte]"
````
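`index=False` keeps pandas from writing its row index as an unnamed leading column, so `filtered.csv` contains only the data columns. A quick check, assuming `DATA_DIR` resolves to the same results directory that plot.R uses:

```shell
head -n1 ../results/data/filtered.csv
# with index=False the header starts with a real column name instead of a bare comma
```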
