- Kubernetes Version >= 1.20
- If you are using a self-managed cluster, ensure the flag --allow-privileged=true is set for kube-apiserver.
- Important: If you intend to use the Volume Snapshot feature, the Kubernetes Volume Snapshot CRDs must be installed before the FSx for OpenZFS CSI driver. For installation instructions, see CSI Snapshotter Usage; a sketch of one common approach follows this list.
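For reference, the Volume Snapshot CRDs and the snapshot controller are typically installed from the external-snapshotter project. The kustomize paths below are assumptions based on that project's layout, so treat this as a sketch and defer to the CSI Snapshotter Usage documentation:
# Install the VolumeSnapshot, VolumeSnapshotClass, and VolumeSnapshotContent CRDs
kubectl apply -k "github.com/kubernetes-csi/external-snapshotter/client/config/crd"
# Install the snapshot controller that reconciles those CRDs
kubectl apply -k "github.com/kubernetes-csi/external-snapshotter/deploy/kubernetes/snapshot-controller"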
The driver requires IAM permissions to interact with the Amazon FSx for OpenZFS service to create/delete file systems, volumes, and snapshots on the user's behalf. There are several methods to grant the driver IAM permissions:
- Using IAM roles for service accounts (Recommended) - Create a Kubernetes service account for the driver and attach the AmazonFSxFullAccess AWS-managed policy to it with the following command. If your cluster is in the AWS GovCloud Regions, replace arn:aws: with arn:aws-us-gov:. Likewise, if your cluster is in the AWS China Regions, replace arn:aws: with arn:aws-cn:.
eksctl create iamserviceaccount \
--name fsx-openzfs-csi-controller-sa \
--namespace kube-system \
--cluster $cluster_name \
--attach-policy-arn arn:aws:iam::aws:policy/AmazonFSxFullAccess \
--approve \
--role-name AmazonEKSFSxOpenZFSCSIDriverFullAccess \
--region $region_code
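To confirm the service account was created and annotated with the IAM role, inspect it; the eks.amazonaws.com/role-arn annotation key is the standard one added by IAM roles for service accounts:
kubectl describe serviceaccount fsx-openzfs-csi-controller-sa -n kube-system
# Expect an annotation similar to:
#   eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/AmazonEKSFSxOpenZFSCSIDriverFullAccess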
- Using IAM instance profile - Create the following IAM policy and attach it to the instance profile IAM role of your cluster's worker nodes; an AWS CLI sketch is shown after the policy document. See here for guidelines on how to access your EKS node IAM role.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole",
"iam:AttachRolePolicy",
"iam:PutRolePolicy"
],
"Resource": "arn:aws:iam::*:role/aws-service-role/fsx.amazonaws.com/*"
},
{
"Action":"iam:CreateServiceLinkedRole",
"Effect":"Allow",
"Resource":"*",
"Condition":{
"StringLike":{
"iam:AWSServiceName":[
"fsx.amazonaws.com"
]
}
}
},
{
"Effect": "Allow",
"Action": [
"fsx:CreateFileSystem",
"fsx:UpdateFileSystem",
"fsx:DeleteFileSystem",
"fsx:DescribeFileSystems",
"fsx:CreateVolume",
"fsx:DeleteVolume",
"fsx:DescribeVolumes",
"fsx:CreateSnapshot",
"fsx:DeleteSnapshot",
"fsx:DescribeSnapshots",
"fsx:TagResource",
"fsx:ListTagsForResource"
],
"Resource": ["*"]
}
]
}
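As a rough sketch, you could create the policy and attach it to the node role with the AWS CLI; the file name, policy name, role name, and account ID below are placeholders, not values defined by this project:
# Create a customer-managed policy from the JSON document above (saved locally as a hypothetical fsx-openzfs-csi-policy.json)
aws iam create-policy \
  --policy-name AmazonFSxOpenZFSCSIDriverPolicy \
  --policy-document file://fsx-openzfs-csi-policy.json
# Attach the policy to the worker node IAM role
aws iam attach-role-policy \
  --role-name <your-eks-node-role> \
  --policy-arn arn:aws:iam::<account-id>:policy/AmazonFSxOpenZFSCSIDriverPolicy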
By default, the driver controller tolerates the CriticalAddonsOnly taint and has tolerationSeconds configured as 300. Additionally, the driver node component tolerates all taints. If you do not wish to deploy the driver node component on all nodes, set the Helm value node.tolerateAllTaints to false before deployment. Add entries to node.tolerations to configure customized tolerations for nodes, as in the sketch below.
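For example, a Helm values file along these lines disables the blanket toleration and adds a custom one; this is a minimal sketch and the toleration itself is purely illustrative:
node:
  tolerateAllTaints: false
  tolerations:
    # Illustrative custom toleration; adjust key, value, and effect to match your node taints
    - key: "dedicated"
      operator: "Equal"
      value: "fsx-workloads"
      effect: "NoSchedule"
Pass the file to Helm with --values (or -f) when installing the chart.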
There are potential race conditions on node startup (especially when a node first joins the cluster) where pods or processes that rely on the FSx for OpenZFS CSI driver can act on a node before the driver has started up and become fully ready. To combat this, the FSx for OpenZFS CSI driver contains a feature to automatically remove a taint from the node on startup. Users can taint their nodes when they join the cluster and/or on startup; this prevents other pods from running and/or being scheduled on the node before the FSx for OpenZFS CSI driver becomes ready. This feature is activated by default. Cluster administrators should apply the taint fsx.openzfs.csi.aws.com/agent-not-ready:NoExecute to their nodes:
kubectl taint nodes $NODE_NAME fsx.openzfs.csi.aws.com/agent-not-ready:NoExecute
Note that any taint effect will work, but NoExecute is recommended.
For example, EKS Managed Node Groups support automatically tainting nodes.
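As a sketch, an eksctl ClusterConfig can declare this taint on a managed node group so nodes join the cluster already tainted; the cluster, region, and node group details below are placeholders:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # placeholder
  region: us-east-1       # placeholder
managedNodeGroups:
  - name: fsx-openzfs-nodes   # placeholder
    instanceType: m5.large
    desiredCapacity: 2
    taints:
      # Removed automatically by the FSx for OpenZFS CSI driver once it is ready on the node
      - key: fsx.openzfs.csi.aws.com/agent-not-ready
        effect: NoExecute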
You may deploy the FSx for OpenZFS CSI driver via Kustomize or Helm.
To deploy via Kustomize, apply the stable overlay:
kubectl apply -k "github.com/kubernetes-sigs/aws-fsx-openzfs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-0.1"
Note: Using the master branch to deploy the driver is not supported as the master branch may contain upcoming features incompatible with the currently released stable version of the driver.
To deploy via Helm:
- Add the aws-fsx-openzfs-csi-driver Helm repository.
helm repo add aws-fsx-openzfs-csi-driver https://kubernetes-sigs.github.io/aws-fsx-openzfs-csi-driver
helm repo update
- Install the latest release of the driver.
helm upgrade --install aws-fsx-openzfs-csi-driver \
--namespace kube-system \
aws-fsx-openzfs-csi-driver/aws-fsx-openzfs-csi-driver
Review the configuration values for the Helm chart.
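The chart's default values can be listed with helm show values; this is offered as a convenience, and the chart's own documentation remains the authoritative reference:
helm show values aws-fsx-openzfs-csi-driver/aws-fsx-openzfs-csi-driver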
Once the driver has been deployed, verify the pods are running:
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-fsx-openzfs-csi-driver
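Optionally, confirm the CSIDriver object is registered with the cluster; the driver name below is an assumption inferred from the taint key prefix used earlier in this document:
kubectl get csidriver fsx.openzfs.csi.aws.com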