Add docs for instantiated systemd services #172

Status: Open. Wants to merge 1 commit into base branch `config-from-env`.
20 changes: 15 additions & 5 deletions doc/01-About.md
@@ -24,11 +24,21 @@ including the database, common setup approaches include the following:

## Multi-Cluster Support

Icinga for Kubernetes provides two approaches for monitoring multiple Kubernetes clusters.

**Option 1**: The Icinga for Kubernetes daemons are installed directly within each Kubernetes cluster.
Each daemon connects to a central database, which resides outside the clusters, through an external service.
This database serves as the unified data source for all monitored clusters. The web interface is also hosted
outside the clusters, allowing for an aggregated view of resources from all clusters or a focused view of a
specific cluster. This architecture keeps monitoring local to the clusters while centralizing
data storage and visualization outside of them.

**Option 2**: All components, including the Icinga for Kubernetes daemons and the web interface, operate entirely
outside the Kubernetes clusters. Instead of being deployed within the clusters, multiple systemd service instances
are started on an external system, with each instance connecting to a different cluster.

More about multi-cluster support can be found under
[Configuration](03-Configuration.md#multi-cluster-support-using-systemd-instantiated-services).

## Vision and Roadmap

78 changes: 67 additions & 11 deletions doc/02-Installation.md
@@ -1,4 +1,5 @@
<!-- {% if index %} -->

# Installing Icinga for Kubernetes

![Icinga for Kubernetes](res/icinga-kubernetes-installation.png)
@@ -62,31 +63,85 @@ Icinga for Kubernetes installs its configuration file to `/etc/icinga-kubernetes
pre-populating most of the settings for a local setup. Before running Icinga for Kubernetes,
adjust the database credentials and, if necessary, the connection configuration.
The configuration file explains general settings.
All available settings can be found under [Configuration](03-Configuration.md#configuration-via-yaml-file).

The `icinga-kubernetes` package automatically installs the required systemd unit files to run Icinga for Kubernetes.
The service instances are configured via environment files in `/etc/icinga-kubernetes`.
More about the configuration via environment files can be found under
[Configuration](03-Configuration.md#managing-instances-with-environment-files).


If you're only planning to monitor a single Kubernetes cluster, you can simply edit
`/etc/icinga-kubernetes/default.env`.
This file serves as the configuration for your Icinga for Kubernetes default instance. It contains all the necessary
parameters to connect to your Kubernetes cluster, such as the `KUBECONFIG` variable pointing to your kubeconfig file.
More about the `default.env` file can be found under [Configuration](03-Configuration.md#default-environment).
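A minimal `default.env` could look like the following sketch (the kubeconfig path is illustrative; adjust it to your setup):

```bash
# /etc/icinga-kubernetes/default.env
# Path to the kubeconfig of the cluster monitored by the default instance (illustrative)
KUBECONFIG=/root/.kube/config
```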

##### Configuring Multiple Instances of Icinga for Kubernetes for Multi-Cluster Support

If you're planning to monitor multiple Kubernetes clusters, you can add additional environment files.

**Add a new Instance**:

1. Create a new environment file in `/etc/icinga-kubernetes`. The file name determines the instance name of the
   systemd service. For example, `test-cluster.env` will start the service instance `icinga-kubernetes@test-cluster`.
2. Set the `KUBECONFIG` environment variable to configure how Icinga for Kubernetes can connect to the cluster.
3. Set the `ICINGA_FOR_KUBERNETES_CLUSTER_NAME` environment variable to configure the cluster name. If the environment
   variable is not set, the cluster name defaults to the environment file's name.
4. You can add additional environment variables to override the `config.yml`
([Available environment variables](03-Configuration.md#configuration-via-environment-variables)).
5. Reload the systemd daemon with `systemctl daemon-reload` to recognize the new cluster configs.

An example `test-cluster.env` file could look like the following:

```bash
KUBECONFIG=$HOME/.kube/config
ICINGA_FOR_KUBERNETES_CLUSTER_NAME="Test Cluster"
ICINGA_FOR_KUBERNETES_PROMETHEUS_URL=http://localhost:9090
```

**Remove an Instance**:

1. If running, stop the service instance manually. For `test-cluster` it would be
`systemctl stop icinga-kubernetes@test-cluster`.
2. Remove the corresponding environment file from `/etc/icinga-kubernetes`.
3. Reload the systemd daemon with `systemctl daemon-reload` to make sure the daemon forgets the file.
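For the `test-cluster` example above, these steps correspond roughly to the following commands (a sketch; substitute your own instance name):

```bash
# 1. Stop the running service instance
systemctl stop icinga-kubernetes@test-cluster
# 2. Remove its environment file
rm /etc/icinga-kubernetes/test-cluster.env
# 3. Reload systemd so the removed instance is forgotten
systemctl daemon-reload
```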

!!! Warning

If you stop the service without removing the environment file, the instance will restart when the service is
restarted.

If you remove the environment file without stopping the instance, the instance will try to restart and
fail when the service is restarted.

You can also explicitly define which environment files should be used to start service instances. For this,
you can adjust the `/etc/default/icinga-kubernetes` file.
More about this can be found under [Configuration](03-Configuration.md#service-configuration).

##### Running Icinga for Kubernetes

After configuring, please run the following command to enable and start all configured Icinga for Kubernetes
service instances:

```bash
systemctl enable --now icinga-kubernetes
```

##### Stopping Icinga for Kubernetes

The following command stops all running Icinga for Kubernetes service instances:

```bash
systemctl stop icinga-kubernetes
```

#### Using a Container

Before running Icinga for Kubernetes, create a local `config.yml`
using [the sample configuration](../config.example.yml),
and adjust the database credentials and, if necessary, the connection configuration.
The configuration file explains general settings.
All available settings can be found under [Configuration](03-Configuration.md).
@@ -125,7 +180,8 @@ go build -o icinga-kubernetes cmd/icinga-kubernetes/main.go

##### Configuring Icinga for Kubernetes

Before running Icinga for Kubernetes, create a local `config.yml`
using [the sample configuration](../config.example.yml),
and adjust the database credentials and, if necessary, the connection configuration.
The configuration file explains general settings.
All available settings can be found under [Configuration](03-Configuration.md).
66 changes: 66 additions & 0 deletions doc/03-Configuration.md
@@ -97,3 +97,69 @@ The configurations set by environment variables override the ones set by YAML.
| PROMETHEUS_URL | **Optional.** Prometheus server URL. If not set, metric synchronization is disabled. |
| PROMETHEUS_USERNAME | **Optional.** Prometheus username. |
| PROMETHEUS_PASSWORD | **Optional.** Prometheus password. |

## Multi-Cluster Support using systemd Instantiated Services

Starting from Icinga for Kubernetes version 0.3.0, multi-cluster support has been streamlined through
systemd instantiated services. This approach allows you to run Icinga for Kubernetes components outside of the
Kubernetes clusters themselves while enabling you to monitor multiple Kubernetes clusters. By leveraging systemd,
you can manage separate instances of Icinga for Kubernetes, each connecting to a different cluster, without the need
to install components directly inside the clusters.
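In this model, each cluster maps to one instance of the templated systemd unit. A hypothetical two-cluster setup, with `staging.env` and `production.env` in `/etc/icinga-kubernetes`, could be managed like this (instance names are illustrative):

```bash
# Each instance name corresponds to an environment file in /etc/icinga-kubernetes
systemctl start icinga-kubernetes@staging
systemctl start icinga-kubernetes@production
```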

### Managing Instances with Environment Files

Each instance of Icinga for Kubernetes is managed through an environment file (`.env`). These environment files contain
the necessary configurations for connecting to specific Kubernetes clusters. Generally, the key configuration for each
instance is the `KUBECONFIG`, which points to the kubeconfig file for the relevant cluster. However, it’s also possible
to override other configurations depending on your needs.

The cluster name is typically derived from the environment file name, but you can override this default behavior using
the `ICINGA_FOR_KUBERNETES_CLUSTER_NAME` variable. This cluster name is used throughout the frontend to identify and
organize the monitoring data associated with that cluster.
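As an illustration, a hypothetical `staging.env` would by default yield the cluster name `staging`; setting the variable overrides it:

```bash
# /etc/icinga-kubernetes/staging.env (hypothetical example)
KUBECONFIG=/root/.kube/staging-config
# Without the following line, the cluster name would default to "staging"
ICINGA_FOR_KUBERNETES_CLUSTER_NAME="Staging Cluster"
```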

### Default Environment

The `default.env` file is the default instance configuration. If you're only managing a single cluster, you can simply
edit this file to configure your connection to that cluster. The `default.env` file contains the basic settings needed
for the Icinga for Kubernetes daemon to connect to a Kubernetes cluster, including the `KUBECONFIG` variable, which
points to the kubeconfig file for the cluster.

However, if you’re planning to monitor multiple clusters, you’ll want to create additional environment files for each
additional cluster, as described in the earlier section.

### Service Configuration

The `/etc/default/icinga-kubernetes` file allows you to control which Icinga for Kubernetes service instances should be
started automatically. This provides flexibility when managing multiple clusters by defining which environment files
should be used for systemd service instances.

The `AUTOSTART` variable in `/etc/default/icinga-kubernetes` determines which clusters are automatically started.

The allowed values are:

* **all** (default if empty) – Starts all instances corresponding to environment files in `/etc/icinga-kubernetes/`.
* **none** – Prevents automatic startup of any instances.
* **A space-separated list of cluster names** – Starts only the specified instances, where each name corresponds to an
environment file.

For example, to start only `test-cluster` and `prod-cluster`, set:

```bash
AUTOSTART="test-cluster prod-cluster"
```

This will start `icinga-kubernetes@test-cluster` and `icinga-kubernetes@prod-cluster`, using the configuration from
`/etc/icinga-kubernetes/test-cluster.env` and `/etc/icinga-kubernetes/prod-cluster.env`, respectively.

After modifying this file, you must reload the systemd configuration:

```bash
systemctl daemon-reload
```

If you removed clusters from the `AUTOSTART` list, you may need to manually stop the corresponding instances before
restarting the service:

```bash
systemctl stop icinga-kubernetes@old-cluster
```
