Merge pull request #43703 from metachris/glusterfsReadmeUpdate
Automatic merge from submit-queue
Fixed typos and issues in examples/volumes/glusterfs/README.md
**What this PR does / why we need it**:
This PR updates the GlusterFS `README.md` to fix several typos, bring outdated documentation up to date, and repair examples that no longer worked.
**Which issue this PR fixes**
None
**Special notes for your reviewer**:
None
**Release note**:
`release-note-NONE`
volumes/glusterfs/README.md: 35 additions & 36 deletions
@@ -1,34 +1,35 @@
-## Glusterfs
+## GlusterFS
 
-[Glusterfs](http://www.gluster.org) is an open source scale-out filesystem. These examples provide information about how to allow containers use Glusterfs volumes.
+[GlusterFS](http://www.gluster.org) is an open source scale-out filesystem. These examples provide information about how to allow containers use GlusterFS volumes.
 
-The example assumes that you have already set up a Glusterfs server cluster and the Glusterfs client package is installed on all Kubernetes nodes.
+The example assumes that you have already set up a GlusterFS server cluster and have a working GlusterFS volume ready to use in the containers.
 
 ### Prerequisites
 
-Set up Glusterfs server cluster; install Glusterfs client package on the Kubernetes nodes. ([Guide](http://gluster.readthedocs.io/en/latest/Administrator%20Guide/))
+* Set up a GlusterFS server cluster
+* Create a GlusterFS volume
+* If you are not using hyperkube, you may need to install the GlusterFS client package on the Kubernetes nodes ([Guide](http://gluster.readthedocs.io/en/latest/Administrator%20Guide/))
 
 ### Create endpoints
 
-Here is a snippet of [glusterfs-endpoints.json](glusterfs-endpoints.json),
+The first step is to create the GlusterFS endpoints definition in Kubernetes. Here is a snippet of [glusterfs-endpoints.json](glusterfs-endpoints.json):
 
 ```
-    "addresses": [
-      {
-        "IP": "10.240.106.152"
-      }
-    ],
-    "ports": [
-      {
-        "port": 1
-      }
-    ]
+  "subsets": [
+    {
+      "addresses": [{ "ip": "10.240.106.152" }],
+      "ports": [{ "port": 1 }]
+    },
+    {
+      "addresses": [{ "ip": "10.240.79.157" }],
+      "ports": [{ "port": 1 }]
+    }
+  ]
 ```
 
-The "IP" field should be filled with the address of a node in the Glusterfs server cluster. In this example, it is fine to give any valid value (from 1 to 65535) to the "port" field.
+The `subsets` field should be populated with the addresses of the nodes in the GlusterFS cluster. It is fine to provide any valid value (from 1 to 65535) in the `port` field.
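For readers trying the updated example end to end, here is a minimal sketch of how the endpoints definition might be created and inspected. It assumes `kubectl` is configured against the target cluster and that the Endpoints object is named `glusterfs-cluster`, as referenced later by the pod spec:

```sh
# Create the Endpoints object from the example file
# (run from the examples/volumes/glusterfs directory).
kubectl create -f glusterfs-endpoints.json

# List the addresses Kubernetes now knows about for the GlusterFS cluster
kubectl get endpoints glusterfs-cluster
```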
-We need also create a service for this endpoints, so that the endpoints will be persistented. We will add this service without a selector to tell Kubernetes we want to add its endpoints manually. You can see [glusterfs-service.json](glusterfs-service.json) for details.
+We also need to create a service for these endpoints, so that they will persist. We will add this service without a selector to tell Kubernetes we want to add its endpoints manually. You can see [glusterfs-service.json](glusterfs-service.json) for details.
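As a rough sketch (not the literal contents of `glusterfs-service.json`), the selector-less service can be created and checked as follows; the assumption is that the service shares the `glusterfs-cluster` name so Kubernetes associates it with the manually created endpoints:

```sh
# Create the service without a selector; Kubernetes will not manage
# its endpoints, so the ones created above are kept as-is.
kubectl create -f glusterfs-service.json

# The Endpoints section of the output should show the GlusterFS addresses
kubectl describe service glusterfs-cluster
```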
-The following *volume* spec in [glusterfs-pod.json](glusterfs-pod.json) illustrates a sample configuration.
+The following *volume* spec in [glusterfs-pod.json](glusterfs-pod.json) illustrates a sample configuration:
 
 ```json
-    {
-        "name": "glusterfsvol",
-        "glusterfs": {
-            "endpoints": "glusterfs-cluster",
-            "path": "kube_vol",
-            "readOnly": true
-        }
-    }
+    "volumes": [
+        {
+            "name": "glusterfsvol",
+            "glusterfs": {
+                "endpoints": "glusterfs-cluster",
+                "path": "kube_vol",
+                "readOnly": true
+            }
+        }
+    ]
 ```
 
 The parameters are explained as the followings.
 
--**endpoints** is endpoints name that represents a Gluster cluster configuration. *kubelet* is optimized to avoid mount storm, it will randomly pick one from the endpoints to mount. If this host is unresponsive, the next Gluster host in the endpoints is automatically selected.
+-**endpoints** is the name of the Endpoints object that represents a Gluster cluster configuration. *kubelet* is optimized to avoid mount storm, it will randomly pick one from the endpoints to mount. If this host is unresponsive, the next Gluster host in the endpoints is automatically selected.
 -**path** is the Glusterfs volume name.
 -**readOnly** is the boolean that sets the mountpoint readOnly or readWrite.
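Once the endpoints, service, and pod definition are in place, a plausible way to launch and watch the example pod is sketched below, assuming the pod is named `glusterfs` as in the status output shown in the next hunk:

```sh
# Create the pod that mounts the GlusterFS volume
kubectl create -f glusterfs-pod.json

# Wait for the pod to reach the Running state
kubectl get pods -w
```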
@@ -84,17 +87,13 @@ You can verify that the pod is running:
 $ kubectl get pods
 NAME        READY     STATUS    RESTARTS   AGE
 glusterfs   1/1       Running   0          3m
-
-$ kubectl get pods glusterfs -t '{{.status.hostIP}}{{"\n"}}'
-10.240.169.172
 ```
 
-You may ssh to the host (the hostIP) and run 'mount' to see if the Glusterfs volume is mounted,
+You may execute the command `mount` inside the container to see if the GlusterFS volume is mounted correctly:
 
 ```sh
-$ mount | grep kube_vol
-10.240.106.152:kube_vol on /var/lib/kubelet/pods/f164a571-fa68-11e4-ad5c-42010af019b7/volumes/kubernetes.io~glusterfs/glusterfsvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
-```
+$ kubectl exec glusterfs -- mount | grep gluster
+10.240.106.152:kube_vol on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
+```
 
 You may also run `docker ps` on the host to see the actual container.
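If the GlusterFS volume does not appear in the `mount` output, one general way to troubleshoot (not specific to this example) is to inspect the pod's events, where failed mounts are typically reported:

```sh
# Mount errors surface as events at the bottom of the describe output
kubectl describe pod glusterfs
```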