
Commit 024208e

Fix trailing whitespace in all docs

1 parent 3c95bd4 · commit 024208e


81 files changed: +311, -311 lines

Some content is hidden: large commits show only part of their diff by default, so only a subset of the 81 changed files appears below.


docs/admin/authorization.md

Lines changed: 4 additions & 4 deletions
@@ -35,7 +35,7 @@ Documentation for other releases can be found at
 
 
 In Kubernetes, authorization happens as a separate step from authentication.
-See the [authentication documentation](authentication.md) for an
+See the [authentication documentation](authentication.md) for an
 overview of authentication.
 
 Authorization applies to all HTTP accesses on the main (secure) apiserver port.
@@ -60,8 +60,8 @@ The following implementations are available, and are selected by flag:
 A request has 4 attributes that can be considered for authorization:
 - user (the user-string which a user was authenticated as).
 - whether the request is readonly (GETs are readonly)
-- what resource is being accessed
-- applies only to the API endpoints, such as
+- what resource is being accessed
+- applies only to the API endpoints, such as
 `/api/v1/namespaces/default/pods`. For miscellaneous endpoints, like `/version`, the
 resource is the empty string.
 - the namespace of the object being access, or the empty string if the
@@ -95,7 +95,7 @@ interface.
 A request has attributes which correspond to the properties of a policy object.
 
 When a request is received, the attributes are determined. Unknown attributes
-are set to the zero value of its type (e.g. empty string, 0, false).
+are set to the zero value of its type (e.g. empty string, 0, false).
 
 An unset property will match any value of the corresponding
 attribute. An unset attribute will match any value of the corresponding property.

docs/admin/cluster-troubleshooting.md

Lines changed: 2 additions & 2 deletions
@@ -36,7 +36,7 @@ Documentation for other releases can be found at
 This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
 problem you are experiencing. See
 the [application troubleshooting guide](../user-guide/application-troubleshooting.md) for tips on application debugging.
-You may also visit [troubleshooting document](../troubleshooting.md) for more information.
+You may also visit [troubleshooting document](../troubleshooting.md) for more information.
 
 ## Listing your cluster
 
@@ -73,7 +73,7 @@ This is an incomplete list of things that could go wrong, and how to adjust your
 Root causes:
 - VM(s) shutdown
 - Network partition within cluster, or between cluster and users
-- Crashes in Kubernetes software
+- Crashes in Kubernetes software
 - Data loss or unavailability of persistent storage (e.g. GCE PD or AWS EBS volume)
 - Operator error, e.g. misconfigured Kubernetes software or application software
 

docs/admin/etcd.md

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ Documentation for other releases can be found at
 
 [etcd](https://coreos.com/etcd/docs/2.0.12/) is a highly-available key value
 store which Kubernetes uses for persistent storage of all of its REST API
-objects.
+objects.
 
 ## Configuration: high-level goals
 

docs/admin/high-availability.md

Lines changed: 1 addition & 1 deletion
@@ -102,7 +102,7 @@ to make sure that each automatically restarts when it fails. To achieve this, w
 the `kubelet` that we run on each of the worker nodes. This is convenient, since we can use containers to distribute our binaries, we can
 establish resource limits, and introspect the resource usage of each daemon. Of course, we also need something to monitor the kubelet
 itself (insert who watches the watcher jokes here). For Debian systems, we choose monit, but there are a number of alternate
-choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run 'systemctl enable kubelet'.
+choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run 'systemctl enable kubelet'.
 
 If you are extending from a standard Kubernetes installation, the `kubelet` binary should already be present on your system. You can run
 `which kubelet` to determine if the binary is in fact installed. If it is not installed,

docs/admin/introduction.md

Lines changed: 1 addition & 1 deletion
@@ -90,7 +90,7 @@ project](salt.md).
 
 ## Multi-tenant support
 
-* **Resource Quota** ([resource-quota.md](resource-quota.md))
+* **Resource Quota** ([resource-quota.md](resource-quota.md))
 
 ## Security
 

docs/admin/multi-cluster.md

Lines changed: 2 additions & 2 deletions
@@ -73,13 +73,13 @@ load and growth.
 
 To pick the number of clusters, first, decide which regions you need to be in to have adequate latency to all your end users, for services that will run
 on Kubernetes (if you use a Content Distribution Network, the latency requirements for the CDN-hosted content need not
-be considered). Legal issues might influence this as well. For example, a company with a global customer base might decide to have clusters in US, EU, AP, and SA regions.
+be considered). Legal issues might influence this as well. For example, a company with a global customer base might decide to have clusters in US, EU, AP, and SA regions.
 Call the number of regions to be in `R`.
 
 Second, decide how many clusters should be able to be unavailable at the same time, while still being available. Call
 the number that can be unavailable `U`. If you are not sure, then 1 is a fine choice.
 
-If it is allowable for load-balancing to direct traffic to any region in the event of a cluster failure, then
+If it is allowable for load-balancing to direct traffic to any region in the event of a cluster failure, then
 you need `R + U` clusters. If it is not (e.g you want to ensure low latency for all users in the event of a
 cluster failure), then you need to have `R * U` clusters (`U` in each of `R` regions). In any case, try to put each cluster in a different zone.
 

docs/admin/namespaces.md

Lines changed: 1 addition & 1 deletion
@@ -52,7 +52,7 @@ Each user community has its own:
 
 A cluster operator may create a Namespace for each unique user community.
 
-The Namespace provides a unique scope for:
+The Namespace provides a unique scope for:
 
 1. named resources (to avoid basic naming collisions)
 2. delegated management authority to trusted users

docs/admin/node.md

Lines changed: 1 addition & 1 deletion
@@ -234,7 +234,7 @@ capacity when adding a node.
 The Kubernetes scheduler ensures that there are enough resources for all the pods on a node. It
 checks that the sum of the limits of containers on the node is no greater than than the node capacity. It
 includes all containers started by kubelet, but not containers started directly by docker, nor
-processes not in containers.
+processes not in containers.
 
 If you want to explicitly reserve resources for non-Pod processes, you can create a placeholder
 pod. Use the following template:

docs/admin/resource-quota.md

Lines changed: 2 additions & 2 deletions
@@ -160,14 +160,14 @@ Sometimes more complex policies may be desired, such as:
 
 Such policies could be implemented using ResourceQuota as a building-block, by
 writing a 'controller' which watches the quota usage and adjusts the quota
-hard limits of each namespace according to other signals.
+hard limits of each namespace according to other signals.
 
 Note that resource quota divides up aggregate cluster resources, but it creates no
 restrictions around nodes: pods from several namespaces may run on the same node.
 
 ## Example
 
-See a [detailed example for how to use resource quota](../user-guide/resourcequota/).
+See a [detailed example for how to use resource quota](../user-guide/resourcequota/).
 
 ## Read More
 

docs/admin/service-accounts-admin.md

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ for a number of reasons:
 - Auditing considerations for humans and service accounts may differ.
 - A config bundle for a complex system may include definition of various service
 accounts for components of that system. Because service accounts can be created
-ad-hoc and have namespaced names, such config is portable.
+ad-hoc and have namespaced names, such config is portable.
 
 ## Service account automation
 

docs/api.md

Lines changed: 1 addition & 1 deletion
@@ -55,7 +55,7 @@ What constitutes a compatible change and how to change the API are detailed by t
 
 ## API versioning
 
-To make it easier to eliminate fields or restructure resource representations, Kubernetes supports multiple API versions, each at a different API path prefix, such as `/api/v1beta3`. These are simply different interfaces to read and/or modify the same underlying resources. In general, all API resources are accessible via all API versions, though there may be some cases in the future where that is not true.
+To make it easier to eliminate fields or restructure resource representations, Kubernetes supports multiple API versions, each at a different API path prefix, such as `/api/v1beta3`. These are simply different interfaces to read and/or modify the same underlying resources. In general, all API resources are accessible via all API versions, though there may be some cases in the future where that is not true.
 
 We chose to version at the API level rather than at the resource or field level to ensure that the API presents a clear, consistent view of system resources and behavior, and to enable controlling access to end-of-lifed and/or experimental APIs.
 

docs/design/README.md

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ Documentation for other releases can be found at
 
 # Kubernetes Design Overview
 
-Kubernetes is a system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
+Kubernetes is a system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
 
 Kubernetes establishes robust declarative primitives for maintaining the desired state requested by the user. We see these primitives as the main value added by Kubernetes. Self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers require active controllers, not just imperative orchestration.
 

docs/design/admission_control_resource_quota.md

Lines changed: 3 additions & 3 deletions
@@ -104,7 +104,7 @@ type ResourceQuotaList struct {
 
 ## AdmissionControl plugin: ResourceQuota
 
-The **ResourceQuota** plug-in introspects all incoming admission requests.
+The **ResourceQuota** plug-in introspects all incoming admission requests.
 
 It makes decisions by evaluating the incoming object against all defined **ResourceQuota.Status.Hard** resource limits in the request
 namespace. If acceptance of the resource would cause the total usage of a named resource to exceed its hard limit, the request is denied.
@@ -125,7 +125,7 @@ Any resource that is not part of core Kubernetes must follow the resource naming
 This means the resource must have a fully-qualified name (i.e. mycompany.org/shinynewresource)
 
 If the incoming request does not cause the total usage to exceed any of the enumerated hard resource limits, the plug-in will post a
-**ResourceQuotaUsage** document to the server to atomically update the observed usage based on the previously read
+**ResourceQuotaUsage** document to the server to atomically update the observed usage based on the previously read
 **ResourceQuota.ResourceVersion**. This keeps incremental usage atomically consistent, but does introduce a bottleneck (intentionally)
 into the system.
 
@@ -184,7 +184,7 @@ resourcequotas 1 1
 services 3 5
 ```
 
-## More information
+## More information
 
 See [resource quota document](../admin/resource-quota.md) and the [example of Resource Quota](../user-guide/resourcequota/) for more information.
 

docs/design/architecture.md

Lines changed: 1 addition & 1 deletion
@@ -47,7 +47,7 @@ Each node runs Docker, of course. Docker takes care of the details of downloadi
 
 ### Kubelet
 
-The **Kubelet** manages [pods](../user-guide/pods.md) and their containers, their images, their volumes, etc.
+The **Kubelet** manages [pods](../user-guide/pods.md) and their containers, their images, their volumes, etc.
 
 ### Kube-Proxy
 

docs/design/event_compression.md

Lines changed: 1 addition & 1 deletion
@@ -49,7 +49,7 @@ Event compression should be best effort (not guaranteed). Meaning, in the worst
 ## Design
 
 Instead of a single Timestamp, each event object [contains](http://releases.k8s.io/HEAD/pkg/api/types.go#L1111) the following fields:
-* `FirstTimestamp util.Time`
+* `FirstTimestamp util.Time`
 * The date/time of the first occurrence of the event.
 * `LastTimestamp util.Time`
 * The date/time of the most recent occurrence of the event.

docs/design/expansion.md

Lines changed: 4 additions & 4 deletions
@@ -87,7 +87,7 @@ available to subsequent expansions.
 
 ### Use Case: Variable expansion in command
 
-Users frequently need to pass the values of environment variables to a container's command.
+Users frequently need to pass the values of environment variables to a container's command.
 Currently, Kubernetes does not perform any expansion of variables. The workaround is to invoke a
 shell in the container's command and have the shell perform the substitution, or to write a wrapper
 script that sets up the environment and runs the command. This has a number of drawbacks:
@@ -130,7 +130,7 @@ The exact syntax for variable expansion has a large impact on how users perceive
 feature. We considered implementing a very restrictive subset of the shell `${var}` syntax. This
 syntax is an attractive option on some level, because many people are familiar with it. However,
 this syntax also has a large number of lesser known features such as the ability to provide
-default values for unset variables, perform inline substitution, etc.
+default values for unset variables, perform inline substitution, etc.
 
 In the interest of preventing conflation of the expansion feature in Kubernetes with the shell
 feature, we chose a different syntax similar to the one in Makefiles, `$(var)`. We also chose not
@@ -239,7 +239,7 @@ The necessary changes to implement this functionality are:
 `ObjectReference` and an `EventRecorder`
 2. Introduce `third_party/golang/expansion` package that provides:
 1. An `Expand(string, func(string) string) string` function
-2. A `MappingFuncFor(ObjectEventRecorder, ...map[string]string) string` function
+2. A `MappingFuncFor(ObjectEventRecorder, ...map[string]string) string` function
 3. Make the kubelet expand environment correctly
 4. Make the kubelet expand command correctly
 
@@ -311,7 +311,7 @@ func Expand(input string, mapping func(string) string) string {
 
 #### Kubelet changes
 
-The Kubelet should be made to correctly expand variables references in a container's environment,
+The Kubelet should be made to correctly expand variables references in a container's environment,
 command, and args. Changes will need to be made to:
 
 1. The `makeEnvironmentVariables` function in the kubelet; this is used by

docs/design/namespaces.md

Lines changed: 7 additions & 7 deletions
@@ -52,7 +52,7 @@ Each user community has its own:
 
 A cluster operator may create a Namespace for each unique user community.
 
-The Namespace provides a unique scope for:
+The Namespace provides a unique scope for:
 
 1. named resources (to avoid basic naming collisions)
 2. delegated management authority to trusted users
@@ -142,7 +142,7 @@ type NamespaceSpec struct {
 
 A *FinalizerName* is a qualified name.
 
-The API Server enforces that a *Namespace* can only be deleted from storage if and only if
+The API Server enforces that a *Namespace* can only be deleted from storage if and only if
 it's *Namespace.Spec.Finalizers* is empty.
 
 A *finalize* operation is the only mechanism to modify the *Namespace.Spec.Finalizers* field post creation.
@@ -189,12 +189,12 @@ are known to the cluster.
 The *namespace controller* enumerates each known resource type in that namespace and deletes it one by one.
 
 Admission control blocks creation of new resources in that namespace in order to prevent a race-condition
-where the controller could believe all of a given resource type had been deleted from the namespace,
+where the controller could believe all of a given resource type had been deleted from the namespace,
 when in fact some other rogue client agent had created new objects. Using admission control in this
 scenario allows each of registry implementations for the individual objects to not need to take into account Namespace life-cycle.
 
 Once all objects known to the *namespace controller* have been deleted, the *namespace controller*
-executes a *finalize* operation on the namespace that removes the *kubernetes* value from
+executes a *finalize* operation on the namespace that removes the *kubernetes* value from
 the *Namespace.Spec.Finalizers* list.
 
 If the *namespace controller* sees a *Namespace* whose *ObjectMeta.DeletionTimestamp* is set, and
@@ -245,13 +245,13 @@ In etcd, we want to continue to still support efficient WATCH across namespaces.
 
 Resources that persist content in etcd will have storage paths as follows:
 
-/{k8s_storage_prefix}/{resourceType}/{resource.Namespace}/{resource.Name}
+/{k8s_storage_prefix}/{resourceType}/{resource.Namespace}/{resource.Name}
 
 This enables consumers to WATCH /registry/{resourceType} for changes across namespace of a particular {resourceType}.
 
 ### Kubelet
 
-The kubelet will register pod's it sources from a file or http source with a namespace associated with the
+The kubelet will register pod's it sources from a file or http source with a namespace associated with the
 *cluster-id*
 
 ### Example: OpenShift Origin managing a Kubernetes Namespace
@@ -362,7 +362,7 @@ This results in the following state:
 
 At this point, the Kubernetes *namespace controller* in its sync loop will see that the namespace
 has a deletion timestamp and that its list of finalizers is empty. As a result, it knows all
-content associated from that namespace has been purged. It performs a final DELETE action
+content associated from that namespace has been purged. It performs a final DELETE action
 to remove that Namespace from the storage.
 
 At this point, all content associated with that Namespace, and the Namespace itself are gone.

docs/design/persistent-storage.md

Lines changed: 6 additions & 6 deletions
@@ -41,11 +41,11 @@ Two new API kinds:
 
 A `PersistentVolume` (PV) is a storage resource provisioned by an administrator. It is analogous to a node. See [Persistent Volume Guide](../user-guide/persistent-volumes/) for how to use it.
 
-A `PersistentVolumeClaim` (PVC) is a user's request for a persistent volume to use in a pod. It is analogous to a pod.
+A `PersistentVolumeClaim` (PVC) is a user's request for a persistent volume to use in a pod. It is analogous to a pod.
 
 One new system component:
 
-`PersistentVolumeClaimBinder` is a singleton running in master that watches all PersistentVolumeClaims in the system and binds them to the closest matching available PersistentVolume. The volume manager watches the API for newly created volumes to manage.
+`PersistentVolumeClaimBinder` is a singleton running in master that watches all PersistentVolumeClaims in the system and binds them to the closest matching available PersistentVolume. The volume manager watches the API for newly created volumes to manage.
 
 One new volume:
 
@@ -69,7 +69,7 @@ Cluster administrators use the API to manage *PersistentVolumes*. A custom stor
 
 PVs are system objects and, thus, have no namespace.
 
-Many means of dynamic provisioning will be eventually be implemented for various storage types.
+Many means of dynamic provisioning will be eventually be implemented for various storage types.
 
 
 ##### PersistentVolume API
@@ -116,7 +116,7 @@ TBD
 
 #### Events
 
-The implementation of persistent storage will not require events to communicate to the user the state of their claim. The CLI for bound claims contains a reference to the backing persistent volume. This is always present in the API and CLI, making an event to communicate the same unnecessary.
+The implementation of persistent storage will not require events to communicate to the user the state of their claim. The CLI for bound claims contains a reference to the backing persistent volume. This is always present in the API and CLI, making an event to communicate the same unnecessary.
 
 Events that communicate the state of a mounted volume are left to the volume plugins.
 
@@ -232,9 +232,9 @@ When a claim holder is finished with their data, they can delete their claim.
 $ kubectl delete pvc myclaim-1
 ```
 
-The ```PersistentVolumeClaimBinder``` will reconcile this by removing the claim reference from the PV and change the PVs status to 'Released'.
+The ```PersistentVolumeClaimBinder``` will reconcile this by removing the claim reference from the PV and change the PVs status to 'Released'.
 
-Admins can script the recycling of released volumes. Future dynamic provisioners will understand how a volume should be recycled.
+Admins can script the recycling of released volumes. Future dynamic provisioners will understand how a volume should be recycled.
 
 
 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
