
Conversation

@mingshuoqiu mingshuoqiu commented Aug 5, 2025

Problem:

This PR introduces documentation for the new Virtual Private Cloud (VPC) feature provided by Kube-OVN.

Solution:

Related Issue(s):

harvester/harvester#8527
harvester/harvester#8690

Test plan:

Additional documentation or context


github-actions bot commented Aug 5, 2025

🔨 Latest commit: 333161c
😎 Deploy Preview: https://68aef257de018b6970f1779f--harvester-preview.netlify.app


@albinsun albinsun Aug 5, 2025


Questions:

  1. Should "Vm" be "VM"?
  2. Are there three `VM1` labels on the left side?
  3. Are `vpcpeer1` and `vpcpeer2` just symbols indicating that VPC peering is supported? There are no lines or other elements that define the peering relationships.



### How to use overlay network
To create a new overlay network, go to the **Networks > VM Networks** page and click the **Create** button. You have to specify the name, select the type `OverlayNetwork`. You don't need to specify the cluster network since the overlay network is only enabled on the default management network.

Please add UI screenshots to explain how to create an overlay network, VPC, subnet, etc.


cc @ibrokethecloud


@innobead See https://docs.harvesterhci.io/v1.6/advanced/addons/kubeovn-operator.
I will update the main add-on page in an upcoming PR. There are minor issues that need to be fixed in recently merged PRs.


@innobead If you meant the kubeovn-operator add-on, it's at https://docs.harvesterhci.io/v1.6/advanced/addons/kubeovn-operator.


One thing to note is that if the kubeovn-operator add-on is classified as experimental, we don't need to add it to the list on the Add-on main page.

• Underlay networking is not yet implemented, so there is no way to map a subnet directly to a physical network. Consequently, external hosts cannot reach VMs that live on an overlay subnet.
• Any subnet created in a user-defined VPC has natOutgoing: false by default. The field must be manually set to true; otherwise, VMs on the subnet will not be able to reach the Internet even when the gateway is correctly configured.

Future roadmap

Please create issues for the following items and mention them.

@rrajendran17 rrajendran17 Aug 12, 2025


@mingshuoqiu Please also include the limitation that external connectivity from a VM is possible only on subnets connected via the default VPC (`ovn-cluster`). harvester/harvester#8690


@mingshuoqiu One more observation: similar to other secondary interfaces in VMs, multiple interfaces of type overlay can be created on a VM. Only the first interface in the guest OS is brought up and assigned an IP address unless the others are specified in cloud-init. So with multiple interfaces on a VM, either the secondary interfaces and DHCP must be configured in cloud-init, or the user has to explicitly bring up the interfaces and run `dhclient` to obtain an IP address. For overlay networks, the DHCP server is not external; IP allocation is handled by the DHCP server running in the ovn-cluster.

Since the pods get the IP addresses, the UI shows the IP addresses of all interfaces on VMs using overlay networks through Multus, but the actual guest OS interface will not have an IP address unless the user executes `ip link set dev enp2s0 up` and `dhclient enp2s0`. I think users must be aware of how IPs are allocated for overlay networks and that they need `dhclient` for secondary interfaces.
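To make this concrete, here is a minimal cloud-init sketch (not from the PR) that brings up a secondary interface and requests a DHCP lease from the ovn-cluster DHCP server. The interface name `enp2s0` is hypothetical and may differ per guest OS:

```yaml
#cloud-config
# Sketch only: bring up the secondary NIC and run DHCP on it.
# Interface name enp2s0 is an assumption; check the guest OS for the real name.
runcmd:
  - ip link set dev enp2s0 up
  - dhclient enp2s0
```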

@@ -0,0 +1,667 @@
## **Virtual Private Cloud (VPC) - Concepts & Architecture**

Use the same format as other sub-pages in the networking folder to make this page the entry point for KubeOVN.

In this case, this should be a page titled Overlay VM Network or something similar.


cc @harvester/network

| L2 | Overlay Network | `vswitch1` | Virtual Layer 2 switch that connects VMs; carries subnet traffic. |
| L2/L3 | Virtual Machine | `vm1-vswitch1` | Attached to an Overlay Network; receives IP/Gateway from its subnet. |

#### ASCII Diagram

nit: Use a diagram drawing instead of ASCII for consistency with other pages. May need @jillian-maroket help here in the future.


*Note: You must enable `kubeovn-operator` to deploy Kube-OVN to a Harvester cluster for advanced SDN capabilities such as virtual private cloud (VPC) and subnets for virtual machine workloads.*

1. On the Harvester UI, go to **Advanced** > **Add-ons**.

nit: Add UI screenshots to the tutorial steps.




***Test steps:***

Test steps should be added to the validating issue for QA. If you want to include examples for primary features here, you should call them out as examples instead of test steps.

Add these use cases in a specific section instead to let users understand what functions are provided. Mixing them in VPC Components Overview is confusing.


@innobead I added the examples to the VPC creation and configuration procedure and repurposed the validation part.

@jillian-maroket jillian-maroket left a comment


@mingshuoqiu I completely restructured and rewrote the main sections. Please check what I did and try to apply the same structure in the last 3 sections. The information you provided is useful. We just need to repackage most of it.


## Overlay Network

The [Harvester network-controller](https://github.com/harvester/harvester-network-controller) leverages the [kube-ovn] (https://github.com/kubeovn/kube-ovn) to create OVN-based Virtualized Network and provide a bridge for connection. It helps to connect your VMs to the virtualized network which supports the VPC (Virtual Private Cloud) and Subnet to provide SDN features like Multi-Tenancy, Micro-Segmentation, Isolation...etc. The overlay network can be attached to the Subnet created in Virtual Private Cloud so that VM can access the internal virtualized network and reach the external network. However, the VM can not be accessed by external network like VLAN and Untagged network due to the current limitation of the Virtual Private Cloud.

Suggested change
The [Harvester network-controller](https://github.com/harvester/harvester-network-controller) leverages the [kube-ovn] (https://github.com/kubeovn/kube-ovn) to create OVN-based Virtualized Network and provide a bridge for connection. It helps to connect your VMs to the virtualized network which supports the VPC (Virtual Private Cloud) and Subnet to provide SDN features like Multi-Tenancy, Micro-Segmentation, Isolation...etc. The overlay network can be attached to the Subnet created in Virtual Private Cloud so that VM can access the internal virtualized network and reach the external network. However, the VM can not be accessed by external network like VLAN and Untagged network due to the current limitation of the Virtual Private Cloud.
The [Harvester network-controller](https://github.com/harvester/harvester-network-controller) leverages [Kube-OVN] (https://github.com/kubeovn/kube-ovn) to create an OVN-based virtualized network that supports advanced SDN capabilities such as virtual private cloud (VPC) and subnets for virtual machine workloads.
An overlay network represents a virtual layer 2 switch that encapsulates and forwards traffic between virtual machines. This network can be linked to the subnet created in the VPC so that virtual machines can access the internal virtualized network and also reach the external network. However, the same virtual machines cannot be accessed by external networks such as VLANs and untagged networks because of current VPC limitations.
![](/img/kubeovn-harvester-topology.png)

Comment on lines 123 to 124
### How to use overlay network
To create a new overlay network, go to the **Networks > VM Networks** page and click the **Create** button. You have to specify the name, select the type `OverlayNetwork`. You don't need to specify the cluster network since the overlay network is only enabled on the default management network.

Suggested change
### How to use overlay network
To create a new overlay network, go to the **Networks > VM Networks** page and click the **Create** button. You have to specify the name, select the type `OverlayNetwork`. You don't need to specify the cluster network since the overlay network is only enabled on the default management network.
### Create an Overlay Network
1. Go to **Networks > Virtual Machine Networks**, and then click **Create**.
1. On the **Virtual Machine Network:Create** screen, specify a name for the network.
1. On the **Basics** tab, select `OverlayNetwork` as the network type.
Specifying a cluster network is not required because the overlay network is only enabled on `mgmt` (the built-in management network).
1. Click **Create**.

### How to use overlay network
To create a new overlay network, go to the **Networks > VM Networks** page and click the **Create** button. You have to specify the name, select the type `OverlayNetwork`. You don't need to specify the cluster network since the overlay network is only enabled on the default management network.

The overlay network will act as the `Provider` of the Subnet which is created in `Virtual Private Cloud`. Each Subnet must be mapped to exactly one Overlay Network, and vice versa (1:1 relationship).

Suggested change
The overlay network will act as the `Provider` of the Subnet which is created in `Virtual Private Cloud`. Each Subnet must be mapped to exactly one Overlay Network, and vice versa (1:1 relationship).
The overlay network functions as the provider of the subnet that is created in the virtual private cloud. Because of this, each subnet must be mapped to only one overlay network, and each overlay network can be used by only one subnet. This one-to-one relationship ensures that routing behavior is clear and predictable, subnets are isolated, and routing conflicts and traffic leakage are avoided.
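As a sketch of this one-to-one mapping, a Kube-OVN `Subnet` manifest references its backing overlay network through the `provider` field. All names below are hypothetical, and the `<network>.<namespace>.ovn` provider format follows Kube-OVN's multi-network convention as I understand it:

```yaml
# Sketch only: hypothetical Subnet bound to one overlay network.
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: subnet1
spec:
  vpc: vpc1                        # user-defined VPC that owns this subnet
  cidrBlock: 172.20.10.0/24
  gateway: 172.20.10.1
  provider: vswitch1.default.ovn   # exactly one overlay network backs this subnet
```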

Comment on lines 128 to 134
:::note
Current limitation in Harvester 1.6
• Overlay networks backed by Kube-OVN can only be created on the default cluster - management network.
• Creating an overlay network on any newly created ClusterNetwork is not supported in this release.
• VMs attached to a Kube-OVN overlay subnet must manually add the subnet’s gateway IP as their default route; the DHCP offer does not automatically install the route, so external access fails until the user fixes it inside the guest OS.
• Underlay networking is not yet implemented, so there is no way to map a subnet directly to a physical network. Consequently, external hosts cannot reach VMs that live on an overlay subnet.
• Any subnet created in a user-defined VPC has natOutgoing: false by default. The field must be manually set to true; otherwise, VMs on the subnet will not be able to reach the Internet even when the gateway is correctly configured.

Suggested change
:::note
Current limitation in Harvester 1.6
• Overlay networks backed by Kube-OVN can only be created on the default cluster - management network.
• Creating an overlay network on any newly created ClusterNetwork is not supported in this release.
• VMs attached to a Kube-OVN overlay subnet must manually add the subnet’s gateway IP as their default route; the DHCP offer does not automatically install the route, so external access fails until the user fixes it inside the guest OS.
• Underlay networking is not yet implemented, so there is no way to map a subnet directly to a physical network. Consequently, external hosts cannot reach VMs that live on an overlay subnet.
• Any subnet created in a user-defined VPC has natOutgoing: false by default. The field must be manually set to true; otherwise, VMs on the subnet will not be able to reach the Internet even when the gateway is correctly configured.
### Limitations
The overlay network implementation in Harvester v1.6 has the following limitations:
- Overlay networks that are backed by Kube-OVN can only be created on `mgmt` (the built-in management network).
- If a virtual machine is attached to a Kube-OVN overlay subnet, you must manually add the subnet’s gateway IP as the virtual machine's default route. Attempts to access external destinations fail until you add the route from within the guest operating system.
- Underlay networking is still unavailable. Consequently, you cannot directly map a subnet to a physical network, and external hosts cannot reach virtual machines that live on an overlay subnet.
- The `natOutgoing` field is set to `false` by default in any subnet that is created in a user-defined VPC. If you do not change the value to `true`, virtual machines on the subnet are unable to reach the internet even when the gateway is correctly configured.
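For the `natOutgoing` item, the remedy is a single field in the subnet spec. The manifest below is a sketch with hypothetical names; the one line that matters is `natOutgoing: true`:

```yaml
# Sketch only: hypothetical subnet in a user-defined VPC with outbound NAT enabled.
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: subnet1
spec:
  vpc: vpc1
  cidrBlock: 172.20.10.0/24
  gateway: 172.20.10.1
  natOutgoing: true   # defaults to false in a user-defined VPC; must be set explicitly
```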

Comment on lines 136 to 141
Future roadmap
• Support for provisioning overlay networks on user-defined ClusterNetworks is targeted for a later release.
• DHCP default-route injection
• Underlay networking support
• Outbound-NAT default policy in user VPCs
:::

We only document implemented features. Mentioning roadmap items is tricky because it might set unrealistic expectations. I suggest removing this part.


Sounds good.




***Test steps:***

@innobead I added the examples to the VPC creation and configuration procedure and repurposed the validation part.

Comment on lines 261 to 252
**4.Creat VM**

**Name: vm1-vswitch1**

**Basic**

CPU:1

Memory:2

**Volumes**

Image:Enter your cloudimg, for example: noble-server-cloudimg-amd64

**Networks**

Network: default/vswitch1

**Advanced Options**
```
users:

` `- name: ubuntu

` `groups: [ sudo ]

` `shell: /bin/bash

` `sudo: ALL=(ALL) NOPASSWD:ALL

` `lock\_passwd: false

```
**Name: vm2-vswitch1**

**Basic**

CPU:1

Memory:2

**Volumes**

Image:Enter your cloudimg, for example: noble-server-cloudimg-amd64

**Networks**

Network: default/vswitch1

**Advanced Options**
```
users:

` `- name: ubuntu

` `groups: [ sudo ]

` `shell: /bin/bash

` `sudo: ALL=(ALL) NOPASSWD:ALL

` `lock\_passwd: false

```
**Name: vm1-vswitch2**

**Basic**

CPU:1

Memory:2

**Volumes**

Image:Enter your cloudimg, for example: noble-server-cloudimg-amd64

**Networks**

Network: default/vswitch1

**Advanced Options**
```
users:

` `- name: ubuntu

` `groups: [ sudo ]

` `shell: /bin/bash

` `sudo: ALL=(ALL) NOPASSWD:ALL

` `lock\_passwd: false

```
**Note: Once the VM is running, you will see the Node displaying the NTP server -> 0.suse.pool.ntp.org and the IP address.**

Suggested change
**4.Creat VM**
**Name: vm1-vswitch1**
**Basic**
CPU:1
Memory:2
**Volumes**
Image:Enter your cloudimg, for example: noble-server-cloudimg-amd64
**Networks**
Network: default/vswitch1
**Advanced Options**
```
users:
` `- name: ubuntu
` `groups: [ sudo ]
` `shell: /bin/bash
` `sudo: ALL=(ALL) NOPASSWD:ALL
` `lock\_passwd: false
```
**Name: vm2-vswitch1**
**Basic**
CPU:1
Memory:2
**Volumes**
Image:Enter your cloudimg, for example: noble-server-cloudimg-amd64
**Networks**
Network: default/vswitch1
**Advanced Options**
```
users:
` `- name: ubuntu
` `groups: [ sudo ]
` `shell: /bin/bash
` `sudo: ALL=(ALL) NOPASSWD:ALL
` `lock\_passwd: false
```
**Name: vm1-vswitch2**
**Basic**
CPU:1
Memory:2
**Volumes**
Image:Enter your cloudimg, for example: noble-server-cloudimg-amd64
**Networks**
Network: default/vswitch1
**Advanced Options**
```
users:
` `- name: ubuntu
` `groups: [ sudo ]
` `shell: /bin/bash
` `sudo: ALL=(ALL) NOPASSWD:ALL
` `lock\_passwd: false
```
**Note: Once the VM is running, you will see the Node displaying the NTP server -> 0.suse.pool.ntp.org and the IP address.**
1. Create three virtual machines (`vm1-vswitch1`, `vm2-vswitch1`, and `vm1-vswitch2`) with the following configuration:
- **Basics** tab
- **CPU**: `1`
- **Memory**: `2`
- **Volumes** tab
- **Image Volume**: A cloud image (for example, `noble-server-cloudimg-amd64`)
- **Networks** tab
- **Network**: `default/vswitch1`
- **Advanced Options** tab
```
users:
  - name: ubuntu
    groups: [ sudo ]
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    lock_passwd: false
```
:::note
Once the virtual machines start running, the node displays the NTP server `0.suse.pool.ntp.org` and the IP address.
:::

Comment on lines 358 to 274
**5.**

Open the **serial console** of **vm1-vswitch1 (172.20.10.6)** and ping **vm1-vswitch2 (172.20.20.3)**.

It shows: **ping: connect: Network is unreachable.**

**Adds a default route :**
```
#sudo ip route add default via 172.20.10.1 dev enp1s0
```
**note: For any network traffic that doesn't match a more specific route, send it to the gateway 172.20.10.1 using the network interface enp1s0.**

Open the **serial console** of **vm1-vswitch2 (172.20.20.3)** and ping **vm1-vswitch1 (172.20.10.6)**.

It shows: **ping: connect: Network is unreachable.**

**Adds a default route :**
```
#sudo ip route add default via 172.20.10.1 dev enp1s0
```
**note: For any network traffic that doesn't match a more specific route, send it to the gateway** 172.20.20.1 **using the network interface** enp1s0**.**

Suggested change
**5.**
Open the **serial console** of **vm1-vswitch1 (172.20.10.6)** and ping **vm1-vswitch2 (172.20.20.3)**.
It shows: **ping: connect: Network is unreachable.**
**Adds a default route :**
```
#sudo ip route add default via 172.20.10.1 dev enp1s0
```
**note: For any network traffic that doesn't match a more specific route, send it to the gateway 172.20.10.1 using the network interface enp1s0.**
Open the **serial console** of **vm1-vswitch2 (172.20.20.3)** and ping **vm1-vswitch1 (172.20.10.6)**.
It shows: **ping: connect: Network is unreachable.**
**Adds a default route :**
```
#sudo ip route add default via 172.20.10.1 dev enp1s0
```
**note: For any network traffic that doesn't match a more specific route, send it to the gateway** 172.20.20.1 **using the network interface** enp1s0**.**
1. Open the serial consoles of `vm1-vswitch1` (`172.20.10.6`) and `vm1-vswitch2` (`172.20.20.3`), and then add a default route on each. On `vm1-vswitch1`:
```
sudo ip route add default via 172.20.10.1 dev enp1s0
```
On `vm1-vswitch2`:
```
sudo ip route add default via 172.20.20.1 dev enp1s0
```
If a virtual machine wants to send traffic to an unknown network (not in its local subnet), the traffic must be forwarded to the specified gateway IP using the specified network interface. In this example, `vm1-vswitch1` forwards traffic to the gateway `172.20.10.1` and `vm1-vswitch2` to `172.20.20.1`, both using the network interface `enp1s0`.
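The longest-prefix-match behavior described here can be sketched in Python for `vm1-vswitch1`. The routing table below is illustrative, not read from a real VM:

```python
import ipaddress

# Illustrative routing table for vm1-vswitch1 after the default route is added.
routes = [
    (ipaddress.ip_network("172.20.10.0/24"), None),       # connected subnet (on-link)
    (ipaddress.ip_network("0.0.0.0/0"), "172.20.10.1"),   # manually added default route
]

def next_hop(dst: str):
    """Return the gateway for dst, or None if the destination is on-link."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, gw) for net, gw in routes if addr in net]
    # Longest-prefix match: the most specific matching route wins.
    return max(matches, key=lambda r: r[0].prefixlen)[1]

print(next_hop("172.20.10.7"))   # on-link, delivered directly -> None
print(next_hop("172.20.20.3"))   # other subnet -> forwarded via 172.20.10.1
```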

Comment on lines 380 to 286
Use vm1-vswitch2 (172.20.20.3) to ping vm1-vswitch1 (172.20.10.6) to verify connectivity.

Use vm1-vswitch1 (172.20.10.6) to ping vm1-vswitch2 (172.20.20.3) to verify connectivity.

**If the VM wants to send traffic to an unknown network (not in its local subnet), it will forward that traffic to the specified gateway IP using the specified network interface.**

vm1-vswitch1 will send traffic via 172.20.10.1 through enp1s0.

vm1-vswitch2 will send traffic via 172.20.20.1 through enp1s0.

**This setup allows traffic to be forwarded properly through their gateways, enabling end-to-end connectivity.**

Suggested change
Use vm1-vswitch2 (172.20.20.3) to ping vm1-vswitch1 (172.20.10.6) to verify connectivity.
Use vm1-vswitch1 (172.20.10.6) to ping vm1-vswitch2 (172.20.20.3) to verify connectivity.
**If the VM wants to send traffic to an unknown network (not in its local subnet), it will forward that traffic to the specified gateway IP using the specified network interface.**
vm1-vswitch1 will send traffic via 172.20.10.1 through enp1s0.
vm1-vswitch2 will send traffic via 172.20.20.1 through enp1s0.
**This setup allows traffic to be forwarded properly through their gateways, enabling end-to-end connectivity.**
1. Verify connectivity using the `ping` command.
- Use `vm1-vswitch1` (`172.20.10.6`) to ping `vm1-vswitch2` (`172.20.20.3`).
- Use `vm1-vswitch2` (`172.20.20.3`) to ping `vm1-vswitch1` (`172.20.10.6`).
If you do not add a default route before running the ping command, the console displays the message `ping: connect: Network is unreachable.`.



### How to use overlay network
To create a new overlay network, go to the **Networks > VM Networks** page and click the **Create** button. You have to specify the name, select the type `OverlayNetwork`. You don't need to specify the cluster network since the overlay network is only enabled on the default management network.

@innobead See https://docs.harvesterhci.io/v1.6/advanced/addons/kubeovn-operator.
I will update the main add-on page in an upcoming PR. There are minor issues that need to be fixed in recently merged PRs.


1.Go to the Harvester UI.

2.Navigate to Advanced > Networks.

Should this be **Networks > Virtual Machine Networks**?


**Networks**

Network: default/vswitch1

Should be default/vswitch2?


**Adds a default route :**
```
#sudo ip route add default via 172.20.10.1 dev enp1s0

@albinsun albinsun Aug 7, 2025


Should be 172.20.20.1 ?


Step 2: Create a Subnet and Link It to an Overlay Network

1.Go to Virtual Private Cloud > Subnets.

Should this be **Networks > Virtual Private Cloud > Create Subnet**?


***Test steps:***

**1.Creat Virtual Machine Networks**

Create




**4.Creat VM**

Create

| subnet2 | 20.0.0.0/24 | default/vswitch4 | 20.0.0.1 |


**4.Edit Confic**

Config?

VPC peering
| Local Connect IP | Remote VPC |
|-------------------|-------------|
| 169.254.0.1/30 | vpcpeer-2 |

@albinsun albinsun Aug 7, 2025


So the Local Connect IP is a CIDR instead of a single IP?
Is there any restriction on the mask (e.g. /30, /28, ...)?

Also, as far as I know, 169.254.0.x/30 does not belong to the private address space, so must we use 169.254.0.x/30 here?
https://datatracker.ietf.org/doc/html/rfc1918#section-3
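This can be checked with Python's `ipaddress` module: `169.254.0.0/16` is the link-local block from RFC 3927, which is separate from the RFC 1918 private ranges:

```python
import ipaddress

# The peering connect network used in the example above.
conn = ipaddress.ip_network("169.254.0.0/30")
print(conn.is_link_local)   # True: inside 169.254.0.0/16 (RFC 3927)

# The three RFC 1918 private ranges, for comparison.
rfc1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
print(any(conn.subnet_of(n) for n in rfc1918))   # False: link-local is not RFC 1918 space
```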

```
note: An 'Unschedulable' error typically indicates insufficient memory. Please stop other virtual machines before attempting to start this one again.

**6.**

What is the title for point 6?


- Open the serial console of vm1-vpcpeer1 (10.0.0.2) and adds a default route :
```
#sudo ip route add default via 172.20.10.1 dev enp1s0

Should 172.20.10.1 be 10.0.0.1?


---

### Why use `169.254.0.x/30` instead of private IPs?


Should this be 169.254.x.x/30, since the link-local block is a /16 in RFC 3927?


| CIDR | Next Hop IP |
|--------------|--------------|
| 20.0.0.0/16 | 169.254.0.2 |
@rrajendran17 rrajendran17 Aug 13, 2025


IMO, it is best practice to use the same CIDR for the static route configuration in VPC peering as the subnet CIDR in the peer VPC. Users will generally require all hosts in the subnet to be reachable rather than a sub-range, and the CIDR cannot be larger than what is defined for the actual subnet.
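This recommendation can be sketched as a Kube-OVN `Vpc` manifest. Field names follow the Kube-OVN VPC CRD as I understand it, and all values are hypothetical; the point is that the static route CIDR matches the peer subnet's `/24` exactly rather than a wider `/16`:

```yaml
# Sketch only: hypothetical VPC with a peering and a static route
# whose CIDR equals the peer subnet's CIDR (20.0.0.0/24, not 20.0.0.0/16).
apiVersion: kubeovn.io/v1
kind: Vpc
metadata:
  name: vpcpeer-1
spec:
  vpcPeerings:
    - remoteVpc: vpcpeer-2
      localConnectIP: 169.254.0.1/30
  staticRoutes:
    - cidr: 20.0.0.0/24      # same CIDR as the peer subnet
      nextHopIP: 169.254.0.2
      policy: policyDst
```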

- Overlay networks that are backed by Kube-OVN can only be created on `mgmt` (the built-in management network).
- If a virtual machine is attached to a Kube-OVN overlay subnet, you must manually add the subnet’s gateway IP as the virtual machine's default route. Attempts to access external destinations fail until you add the route from within the guest operating system.
- Underlay networking is still unavailable. Consequently, you cannot directly map a subnet to a physical network, and external hosts cannot reach virtual machines that live on an overlay subnet.
- The `natOutgoing` field is set to `false` by default in all subnets whether they are created in the default VPC or in a user-defined VPC. If you do not change the value to `true`, virtual machines on the subnet are unable to reach the internet even when the gateway is correctly configured.


Does this claim conflict with docs/networking/kubeovn-vpc.md line 135?

Please help check, because this relates to [BUG] Custom Subnet created under default VPC has natOutgoing with default value false.

### Limitations
The overlay network implementation in Harvester v1.6 has the following limitations:
- Overlay networks that are backed by Kube-OVN can only be created on `mgmt` (the built-in management network).
- If a virtual machine is attached to a Kube-OVN overlay subnet, you must manually add the subnet’s gateway IP as the virtual machine's default route. Attempts to access external destinations fail until you add the route from within the guest operating system.

Can we please change this to instruct users, via an example, on how to use the `managedtap` binding? The latest v1.6-head images allow the `managedtap` binding to be consumed.

@jillian-maroket

@Vicente-Cheng This PR only needs a final technical review before merging. I will just open another PR to fix language and markup issues. Getting the doc links is more important right now.

@mingshuoqiu force-pushed the kubeovn-vpc-docs branch 16 times, most recently from 34ada6c to b1fe9f4 on August 27, 2025 10:32

@Vicente-Cheng Vicente-Cheng left a comment


lgtm, thanks!

@Vicente-Cheng Vicente-Cheng merged commit 189bbbd into harvester:main Aug 27, 2025
3 checks passed