Add doc for newly introduced feature Virtual Private Cloud #838
Conversation
Questions:
- Should `Vm` be `VM`?
- There are 3 `VM1` on the left side?
- Are `vpcpeer1` and `vpcpeer2` just symbols here to represent VPC peering support? There are no lines or other elements that define those peering relationships.
**docs/networking/harvester-network.md** (outdated)

> ### How to use overlay network
> To create a new overlay network, go to the **Networks > VM Networks** page and click the **Create** button. You have to specify the name, select the type `OverlayNetwork`. You don't need to specify the cluster network since the overlay network is only enabled on the default management network.
Please add UI screenshots to explain how to create an overlay network, VPC, subnet, etc.
- Also, mention that this function is activated only when the KubeOVN add-on is enabled.
- Add the KubeOVN add-on to https://docs.harvesterhci.io/v1.6/advanced/addons.
@innobead See https://docs.harvesterhci.io/v1.6/advanced/addons/kubeovn-operator.
I will update the main add-on page in an upcoming PR. There are minor issues that need to be fixed in recently merged PRs.
@innobead If you meant the kubeovn-operator add-on, it's at https://docs.harvesterhci.io/v1.6/advanced/addons/kubeovn-operator.
One thing to note: if the kubeovn-operator add-on is classified as experimental, we don't need to add it to the list on the main Add-ons page.
**docs/networking/harvester-network.md** (outdated)

> - Underlay networking is not yet implemented, so there is no way to map a subnet directly to a physical network. Consequently, external hosts cannot reach VMs that live on an overlay subnet.
> - Any subnet created in a user-defined VPC has `natOutgoing: false` by default. The field must be manually set to `true`; otherwise, VMs on the subnet will not be able to reach the Internet even when the gateway is correctly configured.
>
> Future roadmap
Please create issues for the following items and mention them.
@mingshuoqiu Please also include the limitation that external connectivity from a VM is possible only on subnets connected via the default VPC (`ovn-cluster`). harvester/harvester#8690
@mingshuoqiu One more observation: similar to other secondary interfaces in VMs, multiple interfaces of type overlay can be created on a VM. Only the first interface in the guest OS is brought up and assigned an IP address unless configured otherwise in cloud-init. So with multiple interfaces on a VM, either the secondary interfaces and DHCP must be configured in cloud-init, or the user has to explicitly bring up the interfaces and run dhclient to obtain an IP address. In the case of overlay networks, the DHCP server is not external; IP allocation is handled by the DHCP service running in the ovn-cluster.

Since the pods get the IP addresses, the UI will show the IP addresses of all interfaces on VMs using overlay networks through Multus, but the actual guest OS interface will not have an IP address unless the user executes `ip link set dev enp2s0 up` and `dhclient enp2s0`. Users must be aware of how IPs are allocated for overlay networks and that they need dhclient for secondary interfaces.
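A hedged sketch of the cloud-init workaround described above (the interface name `enp2s0` and the use of `dhclient` are illustrative and depend on the guest OS):

```yaml
#cloud-config
runcmd:
  # Bring up the secondary overlay interface and request an address from the
  # DHCP service that runs inside the ovn-cluster (not an external DHCP server).
  - ip link set dev enp2s0 up
  - dhclient enp2s0
```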
**docs/networking/kubeovn-vpc.md** (outdated)

> ## **Virtual Private Cloud (VPC) - Concepts & Architecture**
**docs/networking/kubeovn-vpc.md** (outdated)

> | L2 | Overlay Network | `vswitch1` | Virtual Layer 2 switch that connects VMs; carries subnet traffic. |
> | L2/L3 | Virtual Machine | `vm1-vswitch1` | Attached to an Overlay Network; receives IP/Gateway from its subnet. |
>
> #### ASCII Diagram
nit: Use a diagram drawing instead of ASCII for consistency with other pages. May need @jillian-maroket help here in the future.
**docs/networking/kubeovn-vpc.md** (outdated)

> *Note:* You must enable `kubeovn-operator` to deploy Kube-OVN to a Harvester cluster for advanced SDN capabilities such as virtual private cloud (VPC) and subnets for virtual machine workloads.
>
> 1. On the Harvester UI, go to **Advanced** > **Add-ons**.
nit: Add UI screenshots to the tutorial steps.
**docs/networking/kubeovn-vpc.md** (outdated)

> ***Test steps:***
Test steps should be added to the validating issue for QA. If you want to include examples for primary features here, call them out as examples instead of test steps.

Add these use cases in a dedicated section instead, to let users understand what functions are provided. Mixing them into the VPC Components Overview is confusing.
@innobead I added the examples to the VPC creation and configuration procedure and repurposed the validation part.
@mingshuoqiu I completely restructured and rewrote the main sections. Please check what I did and try to apply the same structure in the last 3 sections. The information you provided is useful. We just need to repackage most of it.
**docs/networking/harvester-network.md** (outdated)

> ## Overlay Network
>
> The [Harvester network-controller](https://github.com/harvester/harvester-network-controller) leverages the [kube-ovn](https://github.com/kubeovn/kube-ovn) to create OVN-based Virtualized Network and provide a bridge for connection. It helps to connect your VMs to the virtualized network which supports the VPC (Virtual Private Cloud) and Subnet to provide SDN features like Multi-Tenancy, Micro-Segmentation, Isolation...etc. The overlay network can be attached to the Subnet created in Virtual Private Cloud so that VM can access the internal virtualized network and reach the external network. However, the VM can not be accessed by external network like VLAN and Untagged network due to the current limitation of the Virtual Private Cloud.
Suggested change:

> The [Harvester network-controller](https://github.com/harvester/harvester-network-controller) leverages [Kube-OVN](https://github.com/kubeovn/kube-ovn) to create an OVN-based virtualized network that supports advanced SDN capabilities such as virtual private cloud (VPC) and subnets for virtual machine workloads.
>
> An overlay network represents a virtual layer 2 switch that encapsulates and forwards traffic between virtual machines. This network can be linked to the subnet created in the VPC so that virtual machines can access the internal virtualized network and also reach the external network. However, the same virtual machines cannot be accessed by external networks such as VLANs and untagged networks because of current VPC limitations.
**docs/networking/harvester-network.md** (outdated)

> ### How to use overlay network
> To create a new overlay network, go to the **Networks > VM Networks** page and click the **Create** button. You have to specify the name, select the type `OverlayNetwork`. You don't need to specify the cluster network since the overlay network is only enabled on the default management network.
Suggested change:

> ### Create an Overlay Network
>
> 1. Go to **Networks > Virtual Machine Networks**, and then click **Create**.
> 1. On the **Virtual Machine Network: Create** screen, specify a name for the network.
> 1. On the **Basics** tab, select `OverlayNetwork` as the network type.
>    Specifying a cluster network is not required because the overlay network is only enabled on `mgmt` (the built-in management network).
> 1. Click **Create**.
docs/networking/harvester-network.md
Outdated
### How to use overlay network | ||
To create a new overlay network, go to the **Networks > VM Networks** page and click the **Create** button. You have to specify the name, select the type `OverlayNetwork`. You don't need to specify the cluster network since the overlay network is only enabled on the default management network. | ||
|
||
The overlay network will act as the `Provider` of the Subnet which is created in `Virtual Private Cloud`. Each Subnet must be mapped to exactly one Overlay Network, and vice versa (1:1 relationship). |
Suggested change:

> The overlay network functions as the provider of the subnet that is created in the virtual private cloud. Because of this, each subnet must be mapped to only one overlay network, and each overlay network can be used by only one subnet. This one-to-one relationship ensures that routing behavior is clear and predictable, subnets are isolated, and routing conflicts and traffic leakage are avoided.
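For reference, the 1:1 mapping is expressed on the Kube-OVN `Subnet` resource through its `provider` field, which points at the overlay network's NetworkAttachmentDefinition. A minimal sketch (names and addresses are illustrative, following the Kube-OVN convention `<network>.<namespace>.ovn`):

```yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: subnet1                    # illustrative subnet name
spec:
  provider: vswitch1.default.ovn   # <overlay network>.<namespace>.ovn
  cidrBlock: 172.20.10.0/24
  gateway: 172.20.10.1
```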
docs/networking/harvester-network.md
Outdated
:::note | ||
Current limitation in Harvester 1.6 | ||
• Overlay networks backed by Kube-OVN can only be created on the default cluster - management network. | ||
• Creating an overlay network on any newly created ClusterNetwork is not supported in this release. | ||
• VMs attached to a Kube-OVN overlay subnet must manually add the subnet’s gateway IP as their default route; the DHCP offer does not automatically install the route, so external access fails until the user fixes it inside the guest OS. | ||
• Underlay networking is not yet implemented, so there is no way to map a subnet directly to a physical network. Consequently, external hosts cannot reach VMs that live on an overlay subnet. | ||
• Any subnet created in a user-defined VPC has natOutgoing: false by default. The field must be manually set to true; otherwise, VMs on the subnet will not be able to reach the Internet even when the gateway is correctly configured. |
Suggested change:

> ### Limitations
>
> The overlay network implementation in Harvester v1.6 has the following limitations:
>
> - Overlay networks that are backed by Kube-OVN can only be created on `mgmt` (the built-in management network).
> - If a virtual machine is attached to a Kube-OVN overlay subnet, you must manually add the subnet's gateway IP as the virtual machine's default route. Attempts to access external destinations fail until you add the route from within the guest operating system.
> - Underlay networking is still unavailable. Consequently, you cannot directly map a subnet to a physical network, and external hosts cannot reach virtual machines that live on an overlay subnet.
> - The `natOutgoing` field is set to `false` by default in any subnet that is created in a user-defined VPC. If you do not change the value to `true`, virtual machines on the subnet are unable to reach the internet even when the gateway is correctly configured.
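The `natOutgoing` limitation above amounts to a one-field change on the Kube-OVN `Subnet` resource. A minimal sketch, assuming a user-defined VPC named `vpc1` (names and CIDR are illustrative):

```yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: subnet1
spec:
  vpc: vpc1                  # the user-defined VPC that owns this subnet
  cidrBlock: 172.20.10.0/24
  gateway: 172.20.10.1
  natOutgoing: true          # defaults to false; required for internet access
```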
docs/networking/harvester-network.md
Outdated
Future roadmap | ||
• Support for provisioning overlay networks on user-defined ClusterNetworks is targeted for a later release. | ||
• DHCP default-route injection | ||
• Underlay networking support | ||
• Outbound-NAT default policy in user VPCs | ||
::: |
We only document implemented features. Mentioning roadmap items is tricky because it might set unrealistic expectations. I suggest removing this part.
Sounds good.
**docs/networking/kubeovn-vpc.md** (outdated)

> **4.Creat VM**
>
> **Name: vm1-vswitch1**
>
> **Basic**
>
> CPU: 1
>
> Memory: 2
>
> **Volumes**
>
> Image: Enter your cloudimg, for example: noble-server-cloudimg-amd64
>
> **Networks**
>
> Network: default/vswitch1
>
> **Advanced Options**
>
> ```
> users:
>   - name: ubuntu
>     groups: [ sudo ]
>     shell: /bin/bash
>     sudo: ALL=(ALL) NOPASSWD:ALL
>     lock_passwd: false
> ```
>
> **Name: vm2-vswitch1**
>
> **Basic**
>
> CPU: 1
>
> Memory: 2
>
> **Volumes**
>
> Image: Enter your cloudimg, for example: noble-server-cloudimg-amd64
>
> **Networks**
>
> Network: default/vswitch1
>
> **Advanced Options**
>
> ```
> users:
>   - name: ubuntu
>     groups: [ sudo ]
>     shell: /bin/bash
>     sudo: ALL=(ALL) NOPASSWD:ALL
>     lock_passwd: false
> ```
>
> **Name: vm1-vswitch2**
>
> **Basic**
>
> CPU: 1
>
> Memory: 2
>
> **Volumes**
>
> Image: Enter your cloudimg, for example: noble-server-cloudimg-amd64
>
> **Networks**
>
> Network: default/vswitch1
>
> **Advanced Options**
>
> ```
> users:
>   - name: ubuntu
>     groups: [ sudo ]
>     shell: /bin/bash
>     sudo: ALL=(ALL) NOPASSWD:ALL
>     lock_passwd: false
> ```
>
> **Note: Once the VM is running, you will see the Node displaying the NTP server -> 0.suse.pool.ntp.org and the IP address.**
Suggested change:

> 1. Create three virtual machines (`vm1-vswitch1`, `vm2-vswitch1`, and `vm1-vswitch2`) with the following configuration:
>    - **Basics** tab
>      - **CPU**: `1`
>      - **Memory**: `2`
>    - **Volumes** tab
>      - **Image Volume**: A cloud image (for example, `noble-server-cloudimg-amd64`)
>    - **Networks** tab
>      - **Network**: `default/vswitch1`
>    - **Advanced Options** tab
>
>      ```
>      users:
>        - name: ubuntu
>          groups: [ sudo ]
>          shell: /bin/bash
>          sudo: ALL=(ALL) NOPASSWD:ALL
>          lock_passwd: false
>      ```
>
> :::note
> Once the virtual machines start running, the node displays the NTP server `0.suse.pool.ntp.org` and the IP address.
> :::
**docs/networking/kubeovn-vpc.md** (outdated)

> **5.**
>
> Open the **serial console** of **vm1-vswitch1 (172.20.10.6)** and ping **vm1-vswitch2 (172.20.20.3)**.
>
> It shows: **ping: connect: Network is unreachable.**
>
> **Adds a default route:**
>
> ```
> #sudo ip route add default via 172.20.10.1 dev enp1s0
> ```
>
> **note: For any network traffic that doesn't match a more specific route, send it to the gateway 172.20.10.1 using the network interface enp1s0.**
>
> Open the **serial console** of **vm1-vswitch2 (172.20.20.3)** and ping **vm1-vswitch1 (172.20.10.6)**.
>
> It shows: **ping: connect: Network is unreachable.**
>
> **Adds a default route:**
>
> ```
> #sudo ip route add default via 172.20.10.1 dev enp1s0
> ```
>
> **note: For any network traffic that doesn't match a more specific route, send it to the gateway 172.20.20.1 using the network interface enp1s0.**
Suggested change:

> 1. Open the serial consoles of `vm1-vswitch1` (`172.20.10.6`) and `vm1-vswitch2` (`172.20.20.3`), and then add a default route on each using the following command:
>
>    ```
>    sudo ip route add default via 172.20.10.1 dev enp1s0
>    ```
>
>    If a virtual machine wants to send traffic to an unknown network (not in the local subnet), the traffic must be forwarded to the specified gateway IP using the specified network interface. In this example, both `vm1-vswitch1` and `vm1-vswitch2` must forward traffic to the gateway `172.20.10.1` using the network interface `enp1s0`.
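The forwarding decision described in this suggestion can be sketched in a few lines of Python (`next_hop` is a hypothetical helper for illustration, not part of any Harvester or Kube-OVN API; the addresses mirror the example values):

```python
import ipaddress

def next_hop(dst, local_subnet="172.20.10.0/24", gateway="172.20.10.1"):
    """Return 'link-local' if dst is on the local subnet, else the gateway IP."""
    if ipaddress.ip_address(dst) in ipaddress.ip_network(local_subnet):
        return "link-local"  # delivered directly on enp1s0, no gateway needed
    return gateway           # no more specific route matches: use default route

print(next_hop("172.20.10.7"))  # same subnet: delivered directly
print(next_hop("172.20.20.3"))  # other subnet: sent to the gateway
```

Without the default route installed, the "else" branch has nowhere to send the packet, which is exactly the `Network is unreachable` failure mode quoted above.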
docs/networking/kubeovn-vpc.md
Outdated
Use vm1-vswitch2 (172.20.20.3) to ping vm1-vswitch1 (172.20.10.6) to verify connectivity. | ||
|
||
Use vm1-vswitch1 (172.20.10.6) to ping vm1-vswitch2 (172.20.20.3) to verify connectivity. | ||
|
||
**If the VM wants to send traffic to an unknown network (not in its local subnet), it will forward that traffic to the specified gateway IP using the specified network interface.** | ||
|
||
vm1-vswitch1 will send traffic via 172.20.10.1 through enp1s0. | ||
|
||
vm1-vswitch2 will send traffic via 172.20.20.1 through enp1s0. | ||
|
||
**This setup allows traffic to be forwarded properly through their gateways, enabling end-to-end connectivity.** |
Suggested change:

> 1. Verify connectivity using the `ping` command.
>    - Use `vm1-vswitch1` (`172.20.10.6`) to ping `vm1-vswitch2` (`172.20.20.3`).
>    - Use `vm1-vswitch2` (`172.20.20.3`) to ping `vm1-vswitch1` (`172.20.10.6`).
>
>    If you do not add a default route before running the ping command, the console displays the message `ping: connect: Network is unreachable.`
docs/networking/harvester-network.md
Outdated
|
||
|
||
### How to use overlay network | ||
To create a new overlay network, go to the **Networks > VM Networks** page and click the **Create** button. You have to specify the name, select the type `OverlayNetwork`. You don't need to specify the cluster network since the overlay network is only enabled on the default management network. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@innobead See https://docs.harvesterhci.io/v1.6/advanced/addons/kubeovn-operator.
I will update the main add-on page in an upcoming PR. There are minor issues that need to be fixed in recently merged PRs.
**docs/networking/kubeovn-vpc.md** (outdated)

> 1.Go to the Harvester UI.
>
> 2.Navigate to Advanced > Networks.
**docs/networking/kubeovn-vpc.md** (outdated)

> **Networks**
>
> Network: default/vswitch1
Should this be `default/vswitch2`?
**docs/networking/kubeovn-vpc.md** (outdated)

> **Adds a default route:**
>
> ```
> #sudo ip route add default via 172.20.10.1 dev enp1s0
> ```
Should this be `172.20.20.1`?
**docs/networking/kubeovn-vpc.md** (outdated)

> Step 2: Create a Subnet and Link It to an Overlay Network
>
> 1.Go to Virtual Private Cloud > Subnets.
**docs/networking/kubeovn-vpc.md** (outdated)

> ***Test steps:***
>
> **1.Creat Virtual Machine Networks**
Typo: "Creat" should be "Create".
**docs/networking/kubeovn-vpc.md** (outdated)

> **4.Creat VM**
Typo: "Creat" should be "Create".
**docs/networking/kubeovn-vpc.md** (outdated)

> | subnet2 | 20.0.0.0/24 | default/vswitch4 | 20.0.0.1 |
>
> **4.Edit Confic**
Typo: should this be "Config"?
**docs/networking/kubeovn-vpc.md** (outdated)

> VPC peering
>
> | Local Connect IP | Remote VPC |
> |------------------|------------|
> | 169.254.0.1/30   | vpcpeer-2  |
So the Local Connect IP is a CIDR instead of a single IP? Is there any restriction on the mask (e.g. /30, /28)?

Also, as far as I know, `169.254.0.x/30` does not belong to the private address space, so must we use `169.254.0.x/30` here?
https://datatracker.ietf.org/doc/html/rfc1918#section-3
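The addressing point raised here can be checked with Python's `ipaddress` module: `169.254.0.0/16` is the IPv4 link-local range (RFC 3927), which is distinct from the RFC 1918 private ranges the comment refers to.

```python
import ipaddress

addr = ipaddress.ip_address("169.254.0.1")
# The three RFC 1918 private address blocks.
rfc1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

print(addr.is_link_local)                   # True: RFC 3927 link-local
print(any(addr in net for net in rfc1918))  # False: not RFC 1918 space
```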
**docs/networking/kubeovn-vpc.md** (outdated)

> ```
> note: An 'Unschedulable' error typically indicates insufficient memory. Please stop other virtual machines before attempting to start this one again.
> ```
>
> **6.**
What is the title for point 6?
**docs/networking/kubeovn-vpc.md** (outdated)

> - Open the serial console of vm1-vpcpeer1 (10.0.0.2) and adds a default route:
>
> ```
> #sudo ip route add default via 172.20.10.1 dev enp1s0
> ```
Should `172.20.10.1` be `10.0.0.1`?
new type of VM Network Signed-off-by: Chris Chiu <[email protected]>
**docs/networking/kubeovn-vpc.md** (outdated)

> ---
>
> ### Why use `169.254.0.x/30` instead of private IPs?
Should this be `169.254.x.x/30`, since the link-local range is a /16 in RFC 3927?
**docs/networking/kubeovn-vpc.md** (outdated)

> | CIDR | Next Hop IP |
> |-------------|-------------|
> | 20.0.0.0/16 | 169.254.0.2 |
IMO, it is best practice to use the same CIDR for the static route configuration in VPC peering as the subnet CIDR in the peer VPC. Users will generally require all hosts in the subnet to be reachable instead of a sub-range, and the CIDR cannot be larger than what is defined for the actual subnet.
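Following that practice, the peering static route on the local VPC would carry the peer subnet's exact CIDR (`20.0.0.0/24` rather than the wider `20.0.0.0/16`). A hedged sketch of a Kube-OVN `Vpc` static route (field names follow the Kube-OVN VPC CRD; the VPC name and addresses are illustrative):

```yaml
apiVersion: kubeovn.io/v1
kind: Vpc
metadata:
  name: vpcpeer-1
spec:
  staticRoutes:
    - cidr: 20.0.0.0/24       # match the peer subnet's CIDR exactly
      nextHopIP: 169.254.0.2  # the peer's local connect IP
      policy: policyDst       # route by destination address
```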
**docs/networking/harvester-network.md** (outdated)

> - Overlay networks that are backed by Kube-OVN can only be created on `mgmt` (the built-in management network).
> - If a virtual machine is attached to a Kube-OVN overlay subnet, you must manually add the subnet's gateway IP as the virtual machine's default route. Attempts to access external destinations fail until you add the route from within the guest operating system.
> - Underlay networking is still unavailable. Consequently, you cannot directly map a subnet to a physical network, and external hosts cannot reach virtual machines that live on an overlay subnet.
> - The `natOutgoing` field is set to `false` by default in all subnets whether they are created in the default VPC or in a user-defined VPC. If you do not change the value to `true`, virtual machines on the subnet are unable to reach the internet even when the gateway is correctly configured.
Does this claim conflict with docs/networking/kubeovn-vpc.md line 135? Please help to check, because this relates to [BUG] Custom Subnet created under default VPC has natOutgoing with default value false.
**docs/networking/harvester-network.md** (outdated)

> ### Limitations
> The overlay network implementation in Harvester v1.6 has the following limitations:
> - Overlay networks that are backed by Kube-OVN can only be created on `mgmt` (the built-in management network).
> - If a virtual machine is attached to a Kube-OVN overlay subnet, you must manually add the subnet's gateway IP as the virtual machine's default route. Attempts to access external destinations fail until you add the route from within the guest operating system.
Can we please change this to instruct users, via an example, on how to use the `managedtap` binding? The latest v1.6-head images allow the `managedtap` binding to be consumed.

@Vicente-Cheng This PR only needs a final technical review before merging. I will just open another PR to fix language and markup issues. Getting the doc links published is more important right now.
lgtm, thanks!
Problem:
This PR introduces the doc for the new Virtual Private Cloud feature introduced by Kube-OVN.
Solution:
Related Issue(s):
harvester/harvester#8527
harvester/harvester#8690
Test plan:
Additional documentation or context