Issue #13 - SFC examples added
Krzysztof Bijakowski committed Jul 26, 2017
1 parent 59017d2 commit 0d329a5
Showing 11 changed files with 2,111 additions and 0 deletions.
233 changes: 233 additions & 0 deletions examples/README.md
@@ -0,0 +1,233 @@
# Examples

All of the examples presented below can be run on a previously created Kubernetes cluster (or a single node + master).
To set up such a cluster you can use this blueprint: https://github.com/cloudify-examples/simple-kubernetes-blueprint/tree/4.0.1


### Simple example

*simple-example-blueprint.yaml*

TODO

### Replicasets

*replicasets-example-blueprint.yaml*

TODO

### Persistent volumes

*persistent_volumes-example-blueprint.yaml*

TODO

### Service chaining

There are 3 blueprints defined as examples of container-based service chaining for Kubernetes.
These scenarios use Linux bridging and static routing to provide chain connectivity between the separate pods.
All of the scenarios are implemented using the utilities-plugin.
A separate generic blueprint, *vnf-blueprint*, is used to define each pod and its network interfaces.
In other words, the main (*service_chain*) deployment uses the utilities-plugin to create a separate deployment from *vnf-blueprint* for each pod, as sketched below.
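As a rough illustration, a single child pod might be declared in the main blueprint via the utilities-plugin deployment proxy. The node type and property layout shown here are assumptions based on later plugin versions; check the documentation of the plugin version you use for the exact schema:

```
node_templates:
  vnf_router:
    type: cloudify.nodes.DeploymentProxy
    properties:
      resource_config:
        blueprint:
          # assumption: vnf-blueprint has already been uploaded under this ID
          id: service_chain_vnf_component
          external_resource: true
        deployment:
          id: vnf_router
          inputs:
            pod_name: router
```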

So, before you start, you need to upload *vnf-blueprint* to the Cloudify Manager. You can do it by executing:

```
cfy blueprints upload vnf-blueprint.yaml -b service_chain_vnf_component
```

You also need to upload the wagons for the plugins used in the blueprints:

```
cfy plugins upload https://github.com/cloudify-incubator/cloudify-utilities-plugin/releases/download/1.2.5/cloudify_utilities_plugin-1.2.5-py27-none-linux_x86_64-centos-Core.wgn
cfy plugins upload https://github.com/cloudify-incubator/cloudify-kubernetes-plugin/releases/download/1.0.0/cloudify_kubernetes_plugin-1.0.0-py27-none-linux_x86_64.wgn
cfy plugins upload http://repository.cloudifysource.org/cloudify/wagons/cloudify-fabric-plugin/1.5/cloudify_fabric_plugin-1.5-py27-none-linux_x86_64-centos-Core.wgn
```

The last step before creating the deployments is to provide the Kubernetes API credentials as secrets.
With this approach the credentials will be reusable across all deployments.
You can do it by executing:

```
cfy secrets create kubernetes_master_ip -s [IP ADDRESS OF KUBERNETES API]
cfy secrets create kubernetes_master_user -s [SSH USERNAME FOR KUBERNETES MASTER]
cfy secrets create kubernetes_master_ssh_key_path -s [SSH KEY FILE PATH FOR KUBERNETES MASTER]
```


#### Example 1

*service_chain_1-example-blueprint.yaml*

This use case deploys a chain with 3 containers:
* client
* VNF (router)
* server

![sfc_uc1](https://user-images.githubusercontent.com/20417307/28112813-b29b6a5c-66fa-11e7-8ecd-8c219a984412.jpg)

You can deploy it by executing:

```
cfy install -b service_chain_1 service_chain_1-example-blueprint.yaml
```

You can verify that this setup has been deployed correctly from the command line of the Kubernetes VM:

1. Check that all of the pods have been created. Execute:

*kubectl get pods*

You should see 3 pods, all in the 'Running' state:

```
NAME      READY     STATUS    RESTARTS   AGE
client    1/1       Running   0          2m
router    1/1       Running   0          2m
server    1/1       Running   0          1m
```

2. Attach to 'client' console:

*kubectl attach client -it*

3. Test connectivity to the server with ping. ICMP traffic should pass:

*ping 192.168.1.7*

4. Try to establish an SSH session. You should be able to make a connection:

*ssh [email protected]*

password: *test*

5. Check that the HTTP server is responding:

*curl 192.168.1.7:8080*

HTTP traffic should pass, and a standard Python SimpleHTTPServer directory listing should be displayed.

*curl 192.168.1.7:8080/?q=banned*

A 404 error is expected.


#### Example 2

*service_chain_2-example-blueprint.yaml*

This use case deploys a chain with 4 containers:
* client
* VNF (router)
* VNF (firewall)
* server

![sfc_uc2](https://user-images.githubusercontent.com/20417307/28112823-b7632502-66fa-11e7-9851-0bdc96017a4a.jpg)

You can deploy it by executing:

```
cfy install -b service_chain_2 service_chain_2-example-blueprint.yaml
```

You can verify that this setup has been deployed correctly from the command line of the Kubernetes VM:

1. Check that all of the pods have been created. Execute:

*kubectl get pods*

You should see 4 pods, all in the 'Running' state:

```
NAME       READY     STATUS    RESTARTS   AGE
client     1/1       Running   0          2m
router     1/1       Running   0          2m
firewall   1/1       Running   0          2m
server     1/1       Running   0          1m
```

2. Attach to 'client' console:

*kubectl attach client -it*

3. Test connectivity to the server with ping. ICMP traffic should pass:

*ping 192.168.1.7*

4. Try to establish an SSH session. SSH (TCP) traffic should be blocked by the firewall, so making a new connection should be impossible:

*ssh [email protected]*

5. Check that the HTTP server is responding:

*curl 192.168.1.7:8080*

HTTP traffic should pass, and a standard Python SimpleHTTPServer directory listing should be displayed.

*curl 192.168.1.7:8080/?q=banned*

A 404 error is expected.


#### Example 3

*service_chain_3-example-blueprint.yaml*

This use case deploys a chain with 5 containers:
* client
* VNF (router)
* VNF (firewall)
* VNF (URL filter)
* server

![sfc_uc3](https://user-images.githubusercontent.com/20417307/28112833-be9eb232-66fa-11e7-8ab5-dbdca51bda99.jpg)

You can deploy it by executing:

```
cfy install -b service_chain_3 service_chain_3-example-blueprint.yaml
```

You can verify that this setup has been deployed correctly from the command line of the Kubernetes VM:

1. Check that all of the pods have been created. Execute:

*kubectl get pods*

You should see 5 pods, all in the 'Running' state:

```
NAME       READY     STATUS    RESTARTS   AGE
client     1/1       Running   0          2m
router     1/1       Running   0          2m
firewall   1/1       Running   0          2m
filter     1/1       Running   0          2m
server     1/1       Running   0          1m
```

2. Attach to 'client' console:

*kubectl attach client -it*

3. Test connectivity to the server with ping. ICMP traffic should pass:

*ping 192.168.1.7*

4. Try to establish an SSH session. SSH (TCP) traffic should be blocked by the firewall, so making a new connection should be impossible:

*ssh [email protected]*

5. Check that the HTTP server is responding:

*curl 192.168.1.7:8080*

HTTP traffic should pass, and a standard Python SimpleHTTPServer directory listing should be displayed.

*curl 192.168.1.7:8080/?q=banned*

HTTP traffic should pass, and a web page with information about the banned request should be displayed.
17 changes: 17 additions & 0 deletions examples/common/docker.yaml
@@ -0,0 +1,17 @@
node_types:
  cloudify.docker.ImageBuilder:
    derived_from: cloudify.nodes.Root
    properties:
      name:
        type: string
      dockerfile_content:
        type: string
    interfaces:
      cloudify.interfaces.lifecycle:
        create:
          implementation: fabric.fabric_plugin.tasks.run_task
          inputs:
            tasks_file:
              default: scripts/docker/image_builder.py
            task_name:
              default: create
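
For reference, a minimal hypothetical usage of this node type could look like the sketch below. The builder script iterates over *dockerfile_content* line by line, so it is passed here as a list of lines; the image name and Dockerfile contents are illustrative only:

```
node_templates:
  http_server_image:
    type: cloudify.docker.ImageBuilder
    properties:
      name: http_server
      dockerfile_content:
        - FROM ubuntu:16.04
        - RUN apt-get update && apt-get install -y python
        - CMD python -m SimpleHTTPServer 8080
```
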
72 changes: 72 additions & 0 deletions examples/common/networking.yaml
@@ -0,0 +1,72 @@
node_types:
  cloudify.kubernetes.networking.linux.ConnectionPoint:
    derived_from: cloudify.nodes.Root
    properties:
      pod_name:
        type: string
      name:
        type: string
      ip:
        type: string
      down:
        type: boolean
        default: false
    interfaces:
      cloudify.interfaces.lifecycle:
        create:
          implementation: fabric.fabric_plugin.tasks.run_task
          inputs:
            tasks_file:
              default: scripts/networking/linux/connection_point.py
            task_name:
              default: create

  cloudify.kubernetes.networking.linux.VirtualLink:
    derived_from: cloudify.nodes.Root
    properties:
      name:
        type: string
      ip:
        type: string
      input_interface:
        type: string
      output_interface:
        type: string
    interfaces:
      cloudify.interfaces.lifecycle:
        create:
          implementation: fabric.fabric_plugin.tasks.run_task
          inputs:
            tasks_file:
              default: scripts/networking/linux/virtual_link.py
            task_name:
              default: create
        delete:
          implementation: fabric.fabric_plugin.tasks.run_task
          inputs:
            tasks_file:
              default: scripts/networking/linux/virtual_link.py
            task_name:
              default: delete

  cloudify.kubernetes.networking.linux.ForwardingPath:
    derived_from: cloudify.nodes.Root
    properties:
      members:
        description: ''
    interfaces:
      cloudify.interfaces.lifecycle:
        create:
          implementation: fabric.fabric_plugin.tasks.run_task
          inputs:
            tasks_file:
              default: scripts/networking/linux/forwarding_path.py
            task_name:
              default: create
        delete:
          implementation: fabric.fabric_plugin.tasks.run_task
          inputs:
            tasks_file:
              default: scripts/networking/linux/forwarding_path.py
            task_name:
              default: delete
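
For reference, a minimal hypothetical combination of these node types is sketched below. The exact semantics of the properties are defined by the referenced scripts; all names and addresses here are placeholders:

```
node_templates:
  client_connection_point:
    type: cloudify.kubernetes.networking.linux.ConnectionPoint
    properties:
      pod_name: client
      name: veth-client
      ip: 192.168.1.2

  client_router_link:
    type: cloudify.kubernetes.networking.linux.VirtualLink
    properties:
      name: link0
      ip: 192.168.1.1
      input_interface: veth-client
      output_interface: veth-router

  chain:
    type: cloudify.kubernetes.networking.linux.ForwardingPath
    properties:
      members: [client_connection_point, client_router_link]
```
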
56 changes: 56 additions & 0 deletions examples/scripts/docker/image_builder.py
@@ -0,0 +1,56 @@
from fabric.api import sudo
from cloudify import ctx
from cloudify.exceptions import NonRecoverableError


def _get_input_parameter(name, kwargs):
    # Node properties take precedence over operation inputs
    parameter = ctx.node.properties.get(name, kwargs.get(name, None))

    if not parameter:
        raise NonRecoverableError(
            'Mandatory input parameter {0} for lifecycle script not specified'
            .format(name)
        )

    return parameter


def _image_exists(name):
    ctx.logger.info('Checking if image {0} exists'.format(name))

    command = 'docker images {0} -q'.format(name)
    ctx.logger.info('Executing command: {0}'.format(command))

    # 'docker images -q' prints the image ID only if the image exists,
    # so a non-empty result means the image is already present
    return bool(sudo(command))


def _build(name, dockerfile_content):
    ctx.logger.info('Building image {0} ...'.format(name))

    # Write the Dockerfile to a temporary file on the target host
    command = 'echo "" > {0}_temp.dockerfile'.format(name)
    ctx.logger.info('Executing command: {0}'.format(command))
    sudo(command)

    # dockerfile_content is consumed line by line, so it is expected
    # to be an iterable of Dockerfile lines
    for line in dockerfile_content:
        command = 'echo "{0}" >> {1}_temp.dockerfile'.format(line, name)
        ctx.logger.info('Executing command: {0}'.format(command))
        sudo(command)

    command = 'docker build -t {0} -f {0}_temp.dockerfile .'.format(name)
    ctx.logger.info('Executing command: {0}'.format(command))
    sudo(command)

    ctx.logger.info('Build success')


def create(**kwargs):
    name = _get_input_parameter('name', kwargs)
    dockerfile_content = _get_input_parameter('dockerfile_content', kwargs)

    ctx.logger.info('Docker image builder started for image {0}'.format(name))

    if not _image_exists(name):
        _build(name, dockerfile_content)
        return

    ctx.logger.warn('Image {0} already exists. Exiting.'.format(name))