Commit d7e920a

Ironic deployment guide documentation
1 parent 6da32e1 commit d7e920a

3 files changed: +323 -11 lines changed

doc/source/configuration/index.rst

Lines changed: 1 addition & 0 deletions
@@ -11,6 +11,7 @@ the various features provided.
   walled-garden
   release-train
   host-images
+  ironic
   lvm
   swap
   cephadm

doc/source/configuration/ironic.rst

Lines changed: 322 additions & 0 deletions
@@ -0,0 +1,322 @@

======
Ironic
======

Ironic networking
=================

Ironic requires the workload provisioning and cleaning networks to be
configured in ``networks.yml``.

The workload provisioning network will require an allocation pool for Ironic
Inspector and one for Neutron. The inspection allocation pool is used to
define static addresses for baremetal nodes during inspection, and the
Neutron allocation pool is used to assign addresses dynamically during
baremetal provisioning.

.. code-block:: yaml

   # Workload provisioning network IP information.
   provision_wl_net_cidr: "172.0.0.0/16"
   provision_wl_net_allocation_pool_start: "172.0.0.4"
   provision_wl_net_allocation_pool_end: "172.0.0.6"
   provision_wl_net_inspection_allocation_pool_start: "172.0.1.4"
   provision_wl_net_inspection_allocation_pool_end: "172.0.1.250"
   provision_wl_net_neutron_allocation_pool_start: "172.0.2.4"
   provision_wl_net_neutron_allocation_pool_end: "172.0.2.250"
   provision_wl_net_neutron_gateway: "172.0.1.1"

The cleaning network will also require a Neutron allocation pool.

.. code-block:: yaml

   # Cleaning network IP information.
   cleaning_net_cidr: "172.1.0.0/16"
   cleaning_net_allocation_pool_start: "172.1.0.4"
   cleaning_net_allocation_pool_end: "172.1.0.6"
   cleaning_net_neutron_allocation_pool_start: "172.1.2.4"
   cleaning_net_neutron_allocation_pool_end: "172.1.2.250"
   cleaning_net_neutron_gateway: "172.1.0.1"

OpenStack Config
================

Overcloud Ironic will be deployed with a TFTP server listening on the control
plane, which provides PXE-booting baremetal nodes with the Ironic Python
Agent (IPA) kernel and ramdisk. Since the TFTP server listens exclusively on
the internal API network, a route must exist between the
provisioning/cleaning networks and the internal API network. This can be
achieved by defining a Neutron router using
`OpenStack Config <https://github.com/stackhpc/openstack-config>`_.

It is not necessary to define the provision and cleaning networks in this
configuration, as they will be generated during

.. code-block:: console

   kayobe overcloud post configure

The OpenStack Config file could resemble the network, subnet and router
configuration shown below:

.. code-block:: yaml

   networks:
     - "{{ openstack_network_internal }}"

   openstack_network_internal:
     name: "internal-net"
     project: "admin"
     provider_network_type: "vlan"
     provider_physical_network: "physnet1"
     provider_segmentation_id: 458
     shared: false
     external: true

   subnets:
     - "{{ openstack_subnet_internal }}"

   openstack_subnet_internal:
     name: "internal-net"
     project: "admin"
     cidr: "10.10.3.0/24"
     enable_dhcp: true
     allocation_pool_start: "10.10.3.3"
     allocation_pool_end: "10.10.3.3"

   openstack_routers:
     - "{{ openstack_router_ironic }}"

   openstack_router_ironic:
     name: ironic
     project: admin
     interfaces:
       - net: "provision-net"
         subnet: "provision-net"
         portip: "172.0.1.1"
       - net: "cleaning-net"
         subnet: "cleaning-net"
         portip: "172.1.0.1"
     network: internal-net

To provision baremetal nodes in Nova you will also need to define a flavor
specific to that type of baremetal host. Replace the custom resource
``resources:CUSTOM_<YOUR_BAREMETAL_RESOURCE_CLASS>`` placeholder with the
resource class of your baremetal hosts; you will also need this later when
configuring the baremetal-compute inventory.

.. code-block:: yaml

   openstack_flavors:
     - "{{ openstack_flavor_baremetal_A }}"

   # Bare metal compute node.
   openstack_flavor_baremetal_A:
     name: "baremetal-A"
     ram: 1048576
     disk: 480
     vcpus: 256
     extra_specs:
       "resources:CUSTOM_<YOUR_BAREMETAL_RESOURCE_CLASS>": 1
       "resources:VCPU": 0
       "resources:MEMORY_MB": 0
       "resources:DISK_GB": 0

Enabling conntrack
==================

Conntrack_helper will be required when UEFI booting on a cloud using ML2/OVS
with the iptables firewall driver, otherwise TFTP traffic is dropped because
it is UDP. You will need to define some extension drivers in ``neutron.yml``
to ensure conntrack is enabled in the Neutron server.

.. code-block:: yaml

   kolla_neutron_ml2_extension_drivers:
     - port_security
     - conntrack_helper
     - dns_domain_ports

The Neutron L3 agent also requires conntrack_helper to be set as an extension
in ``kolla/config/neutron/l3_agent.ini``.

.. code-block:: ini

   [agent]
   extensions = conntrack_helper

It is also required to load the conntrack kernel modules ``nf_nat_tftp``,
``nf_conntrack`` and ``nf_conntrack_tftp`` on network nodes. You can load
these modules using modprobe, or define them in ``/etc/modules-load.d/`` so
that they persist across reboots.
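
For example, you could load the modules immediately and persist them across
reboots as follows (the file name is illustrative):

.. code-block:: console

   # modprobe nf_conntrack
   # modprobe nf_conntrack_tftp
   # modprobe nf_nat_tftp
   # cat <<EOF > /etc/modules-load.d/tftp-conntrack.conf
   nf_conntrack
   nf_conntrack_tftp
   nf_nat_tftp
   EOF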

The Ironic Neutron router will also need to be configured to use
conntrack_helper.

.. code-block:: json

   "conntrack_helpers": {
       "protocol": "udp",
       "port": 69,
       "helper": "tftp"
   }

To add the conntrack helper to the Neutron router, you can use the OpenStack
CLI:

.. code-block:: console

   openstack network l3 conntrack helper create \
     --helper tftp \
     --protocol udp \
     --port 69 \
     <ironic_router_uuid>

Baremetal inventory
===================

To begin enrolling nodes you will need to define them in the hosts file.

.. code-block:: ini

   [r1]
   hv1 ipmi_address=10.1.28.16
   hv2 ipmi_address=10.1.28.17

   [baremetal-compute:children]
   r1

The baremetal nodes will also require some extra variables to be defined in
the group_vars for your rack, including the BMC credentials and the Ironic
driver you wish to use.

.. code-block:: yaml

   ironic_driver: redfish

   ironic_driver_info:
     redfish_system_id: "{{ ironic_redfish_system_id }}"
     redfish_address: "{{ ironic_redfish_address }}"
     redfish_username: "{{ ironic_redfish_username }}"
     redfish_password: "{{ ironic_redfish_password }}"
     redfish_verify_ca: "{{ ironic_redfish_verify_ca }}"
     ipmi_address: "{{ ipmi_address }}"

   ironic_properties:
     capabilities: "{{ ironic_capabilities }}"

   ironic_resource_class: "example_resource_class"
   ironic_redfish_system_id: "/redfish/v1/Systems/System.Embedded.1"
   ironic_redfish_verify_ca: "{{ inspector_rule_var_redfish_verify_ca }}"
   ironic_redfish_address: "{{ ipmi_address }}"
   ironic_redfish_username: "{{ inspector_redfish_username }}"
   ironic_redfish_password: "{{ inspector_redfish_password }}"
   ironic_capabilities: "boot_option:local,boot_mode:uefi"

The typical layout for baremetal nodes is separated by rack: for instance, in
rack 1 the configuration above defines the BMC address for each node, while
Redfish information such as the username, password and system ID is defined
for the rack as a whole.

You can add more racks to the deployment by replicating the rack 1 example
and adding each one as an entry to the baremetal-compute group, as in the
sketch below.
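
For example, a second rack could be added as follows (the host names and BMC
addresses are illustrative):

.. code-block:: ini

   [r2]
   hv3 ipmi_address=10.1.28.18
   hv4 ipmi_address=10.1.28.19

   [baremetal-compute:children]
   r1
   r2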

Node enrollment
===============

When the nodes are defined in the inventory, you can begin enrolling them by
invoking the Kayobe command

.. code-block:: console

   (kayobe) $ kayobe baremetal compute register

Following registration, the baremetal nodes can be inspected and made
available for provisioning by Nova via the Kayobe commands

.. code-block:: console

   (kayobe) $ kayobe baremetal compute inspect
   (kayobe) $ kayobe baremetal compute provide
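
Once the nodes have been provided, you can check that they have reached the
``available`` state using the OpenStack CLI (a suggested check, which requires
the baremetal CLI plugin):

.. code-block:: console

   (os-venv) $ openstack baremetal node list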

Baremetal hypervisors
=====================

To deploy baremetal hypervisor nodes it is necessary to split out the nodes
you wish to use as hypervisors and add them to the Kayobe compute group, to
ensure the hypervisors are configured as compute nodes during host configure.

.. code-block:: ini

   [r1]
   hv1 ipmi_address=10.1.28.16

   [r1-hyp]
   hv2 ipmi_address=10.1.28.17

   [r1:children]
   r1-hyp

   [compute:children]
   r1-hyp

   [baremetal-compute:children]
   r1

The hypervisor nodes will also need to define hypervisor-specific variables,
such as the image to be used, the network to provision on and the
availability zone. These can be defined under group_vars.

.. code-block:: yaml

   hypervisor_image: "37825714-27da-48e0-8887-d609349e703b"
   key_name: "testing"
   availability_zone: "nova"
   baremetal_flavor: "baremetal-A"
   baremetal_network: "rack-net"
   auth:
     auth_url: "{{ lookup('env', 'OS_AUTH_URL') }}"
     username: "{{ lookup('env', 'OS_USERNAME') }}"
     password: "{{ lookup('env', 'OS_PASSWORD') }}"
     project_name: "{{ lookup('env', 'OS_PROJECT_NAME') }}"

To begin deploying these nodes as instances you will need to run the Ansible
playbook ``deploy-baremetal-instance.yml``.

.. code-block:: console

   (kayobe) $ kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/deploy-baremetal-instance.yml

This playbook will update network allocations with the new baremetal
hypervisor IP addresses, create a Neutron port corresponding to each address
and deploy an image on the baremetal instance.

When the playbook has finished and the rack is successfully imaged, the nodes
can be configured with ``kayobe overcloud host configure`` and Kolla compute
services can be deployed with ``kayobe overcloud service deploy``.
298+
Un-enrolling hypervisors
299+
========================
300+
301+
To convert baremetal hypervisors into regular baremetal compute instances you will need
302+
to drain the hypervisor of all running compute instances, you should first invoke the
303+
nova-compute-disable playbook to ensure all Nova services on the baremetal node are disabled
304+
and compute instances will not be allocated to this node.
305+
306+
.. code-block:: console
307+
308+
(kayobe) $ kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/nova-compute-disable.yml
309+
310+
Now the Nova services are disabled you should also ensure any existing compute instances
311+
are moved elsewhere by invoking the nova-compute-drain playbook
312+
313+
.. code-block:: console
314+
315+
(kayobe) $ kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/nova-compute-drain.yml
316+
317+
Now the node has no instances allocated to it you can delete the instance using
318+
the OpenStack CLI and the node will be moved back to ``available`` state.
319+
320+
.. code-block:: console
321+
322+
(os-venv) $ openstack server delete ...

etc/kayobe/ansible/deploy-baremetal-instance.yml

Lines changed: 0 additions & 11 deletions
@@ -49,17 +49,6 @@
   hosts: compute
   gather_facts: false
   connection: local
-  vars:
-    hypervisor_image: "37825714-27da-48e0-8887-d609349e703b"
-    key_name: "testing"
-    availability_zone: "nova"
-    baremetal_flavor: "baremetal-A"
-    baremetal_network: "rack-net"
-    auth:
-      auth_url: "{{ lookup('env', 'OS_AUTH_URL') }}"
-      username: "{{ lookup('env', 'OS_USERNAME') }}"
-      password: "{{ lookup('env', 'OS_PASSWORD') }}"
-      project_name: "{{ lookup('env', 'OS_PROJECT_NAME') }}"
   tasks:
     - name: Show baremetal node
       ansible.builtin.shell:
