* Add a troubleshooting section to install.md, covering the invalid CloudFormation template issue.
* Move Optional Configuration to the bottom of the README, below Manual Installation. The Optional Configuration instructions are relevant when doing a Manual Installation.
AWS Cognito is initially set up to support OAuth2 requests for authentication and authorization. When first created, it has no users. This step creates a `workshopuser` and assigns the user to the `practitioner` User Group.
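The exact command is not shown in this diff. A minimal sketch using the AWS CLI, with the region, user pool ID, and temporary password as placeholders rather than values from this repository, might look like:

```sh
# Create the workshop user (placeholder region, pool ID, and password)
aws cognito-idp admin-create-user \
  --region <REGION> \
  --user-pool-id <COGNITO_USER_POOL_ID> \
  --username workshopuser \
  --temporary-password <TEMP_PASSWORD>

# Assign the user to the practitioner User Group
aws cognito-idp admin-add-user-to-group \
  --region <REGION> \
  --user-pool-id <COGNITO_USER_POOL_ID> \
  --username workshopuser \
  --group-name practitioner
```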
These parameters can be found by checking the `INFO_OUTPUT.yml` file generated by the installation script, or by running the previously mentioned `serverless info --verbose` command.
### Optional installation configurations
#### Elasticsearch Kibana server
The Kibana server allows you to explore data inside your Elasticsearch instance through a web UI. This server is automatically created if 'stage' is set to `dev`.
Accessing the Kibana server requires you to set up a Cognito user. The installation script can help you set up a Cognito user, or you can do it manually through the AWS Cognito Console.
The installation script will print the URL to the Kibana server after setup completes. Navigate to this URL and enter your login credentials to access the Kibana server.
If you lose this URL, it can be found in the `INFO_OUTPUT.yml` file under the "ElasticsearchDomainKibanaEndpoint" entry.
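For example, assuming `INFO_OUTPUT.yml` sits in the directory where you ran the installation script, you can pull the value out directly:

```sh
# Print the Kibana endpoint recorded by the installation script
grep ElasticsearchDomainKibanaEndpoint INFO_OUTPUT.yml
```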
##### Accessing Elasticsearch Kibana server
> NOTE: Kibana is only deployed in the default 'dev' stage; if you want Kibana set up in other stages, like 'production', please remove `Condition: isDev` from [elasticsearch.yaml](./cloudformation/elasticsearch.yaml)
The Kibana server allows you to explore data inside your Elasticsearch instance through a web UI.
To access the Kibana server for your Elasticsearch Service instance, you need to create and confirm a Cognito user. Run the command below, or create a user from the Cognito console.

```sh
# Find ELASTIC_SEARCH_KIBANA_USER_POOL_APP_CLIENT_ID in the printout
```
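The commands themselves are truncated in this diff. A minimal sketch using the AWS CLI (region, user pool ID, username, password, and email below are placeholders, not values taken from this repository) could be:

```sh
# Sign up a new user against the Kibana user pool app client
aws cognito-idp sign-up \
  --region <REGION> \
  --client-id <ELASTIC_SEARCH_KIBANA_USER_POOL_APP_CLIENT_ID> \
  --username <USERNAME> \
  --password <PASSWORD> \
  --user-attributes Name="email",Value="<EMAIL>"

# Confirm the user so it can log in to Kibana
aws cognito-idp admin-confirm-sign-up \
  --region <REGION> \
  --user-pool-id <ELASTIC_SEARCH_KIBANA_USER_POOL_ID> \
  --username <USERNAME>
```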
After the Cognito user is created and confirmed, you can log in with that username and password at the ELASTIC_SEARCH_DOMAIN_KIBANA_ENDPOINT (found with the `serverless info --verbose` command). **Note:** Kibana will be empty at first and have no indices; they will be created once the FHIR server writes resources to DynamoDB.
#### DynamoDB table backups
Daily DynamoDB table backups can optionally be deployed via an additional 'fhir-server-backups' stack. The installation script will deploy this stack automatically if indicated during installation.
The reason for the separate stack is that backup vaults can only be deleted when they are empty, and you can't delete a stack containing a backup vault while that vault holds any recovery points. Keeping the backups in their own stack makes them easier to operate.
These backups work by using tags. In [serverless.yaml](./serverless.yaml) you can see that ResourceDynamoDBTable carries the `backup - daily` and `service - fhir` tags. Anything with these tags will be backed up daily at 5:00 UTC.
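To confirm the table carries the tags that drive the backup selection, you can inspect them with the AWS CLI (the region, account ID, and table name below are placeholders, not values from this repository):

```sh
# List the tags on the FHIR resource table; expect backup=daily and service=fhir
aws dynamodb list-tags-of-resource \
  --region <REGION> \
  --resource-arn arn:aws:dynamodb:<REGION>:<ACCOUNT_ID>:table/<RESOURCE_TABLE_NAME>
```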
To deploy the stack and start daily backups (outside of the install script):
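The deployment command itself is elided in this diff. A hedged sketch, assuming the backup stack is defined in a CloudFormation template under the repository's `cloudformation/` directory (the path and file name are assumptions), would be:

```sh
# Deploy the separate backups stack; IAM capabilities are needed for the backup role
aws cloudformation create-stack \
  --region <REGION> \
  --stack-name fhir-server-backups \
  --template-body file://cloudformation/backup.yaml \
  --capabilities CAPABILITY_NAMED_IAM
```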
### Deploying audit log mover

Audit Logs are placed into CloudWatch Logs at <CLOUDWATCH_EXECUTION_LOG_GROUP>, listed in the stack outputs as CloudwatchExecutionLogGroup. The Audit Logs include information about requests and responses coming to and from your API Gateway, as well as the Cognito user that made the request.

In addition, if you would like to archive logs older than 7 days into S3 and delete those logs from CloudWatch Logs, please follow the instructions below.
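Those instructions are not included in this diff. A rough sketch, assuming the archiving service lives in an `auditLogMover` directory of this repository (the directory name and flags are assumptions), might look like:

```sh
# Install dependencies and deploy the audit-log archiving service
cd auditLogMover
yarn install
serverless deploy --region <REGION> --stage <STAGE>
```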
##### Making requests to S3 buckets with added encryption policy
S3 bucket policies can only examine request headers. When we set the encryption parameters in the `getSignedUrlPromise` call, those parameters are added to the URL, not the headers, so the bucket policy would block a request that carries the encryption parameters only in the URL. The workaround for adding this bucket policy to the S3 bucket is to have your client add the headers to the request, as in the following example:
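The example itself is elided in this diff. A minimal sketch using `curl` against a presigned URL (the URL, KMS key ID, and file name are placeholders) could look like this:

```sh
# Send the server-side encryption parameters as headers so the bucket policy can evaluate them
curl "<PRESIGNED_S3_PUT_URL>" \
  -H "x-amz-server-side-encryption: aws:kms" \
  -H "x-amz-server-side-encryption-aws-kms-key-id: <KMS_KEY_ID>" \
  --upload-file ./document.pdf
```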
### Troubleshooting

Installation can fail if your computer already has an installation of Python 3 earlier than version 3.3.x.

If the installation fails with the following error:

`An error occurred: DynamodbKMSKey - Exception=[class software.amazon.awssdk.services.kms.model.MalformedPolicyDocumentException] ErrorCode=[MalformedPolicyDocumentException], ErrorMessage=[Policy contains a statement with one or more invalid principals.]`

then Serverless has generated an invalid CloudFormation template. To fix it:
1. Check that `serverless_config.json` has the correct `IAMUserArn`. You can get the ARN by running `aws sts get-caller-identity --query "Arn" --output text`.
2. Go to your AWS account and delete the `fhir-service-<stage>` CloudFormation stack if it exists.
3. Run `sudo ./scripts/install.sh` again.

If you still get the same error after following the steps above, try removing the `fhir-works-on-aws-deployment` repository and downloading it again, then proceed from step 2.
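For example (assuming the repository was originally cloned from the awslabs GitHub organization):

```sh
# Remove the local copy and clone a fresh one
rm -rf fhir-works-on-aws-deployment
git clone https://github.com/awslabs/fhir-works-on-aws-deployment.git
cd fhir-works-on-aws-deployment
```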