Containers in Pantavisor have always had a network namespace, but actually configuring it has been the user’s problem. You either left it on the host network (simple but brittle), wrote a pre-start.sh to bring up an interface (works, but every container reinvents it), or hand-rolled lxc.net.* lines into lxc.container.conf (works once, breaks the day you change bridges or subnets). For teams running fleets of devices with multiple containers that need to talk to each other, this got painful fast.
IPAM (IP Address Management) changes that. You declare named network pools in device.json once — bridge, subnet, gateway, optional NAT — and any container can opt into a pool with a single PV_NETWORK_POOL reference in its args.json. Pantavisor allocates the bridge, hands the container a stable IP, sets up MASQUERADE if you asked for it, and gets out of the way. Restart the container, reboot the device, run an OTA — the container gets the same IP back.
The Concept
A pool is a named L2/L3 segment. Define one (or several) per device:
{
"network": {
"pools": {
"internal": {
"type": "bridge",
"bridge": "pvbr0",
"subnet": "10.0.5.0/24",
"gateway": "10.0.5.1",
"nat": true
}
}
}
}
A container then references the pool by name. Pantavisor takes care of bridge creation, address allocation, and namespace plumbing:
{
"PV_NETWORK_POOL": "internal",
"PV_NETWORK_HOSTNAME": "net-server"
}
That’s it. The container comes up on pvbr0 with the next free address in 10.0.5.0/24 and can talk to any other container on the same pool. Because the pool sets nat: true, it can also reach the outside world through the host’s default route.
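A quick sanity check from inside the container looks like this (the interface name is an assumption; depending on the container config it may not be eth0):
ip addr show eth0       # expect an address from 10.0.5.0/24, e.g. 10.0.5.2/24
ip route                # expect something like: default via 10.0.5.1 dev eth0
ping -c 1 10.0.5.1      # the pool gateway on pvbr0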
Stable Leases
Allocations are keyed by (pool_name, container_name). When pantavisor sees a container by that name come up again — whether from a pvcontrol containers stop / start, from auto-recovery after a crash, or after a reboot — it reuses the existing lease instead of handing out a fresh address. Your services keep their identity across the operational lifecycle.
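Conceptually the lease table is just a map from that key to an address. The sketch below is purely illustrative and is not pantavisor's on-disk format:
{
    "internal/net-server": "10.0.5.2",
    "internal/net-client": "10.0.5.3"
}
As long as the pool and container names stay the same, the right-hand side stays the same too.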
Per-Pool NAT
The nat flag is independent per pool. Want some containers internet-reachable and others on an isolated lab subnet? Define two pools:
{
"network": {
"pools": {
"internal": { "bridge": "pvbr0", "subnet": "10.0.5.0/24", "gateway": "10.0.5.1", "nat": true },
"lab": { "bridge": "pvbr1", "subnet": "10.0.6.0/24", "gateway": "10.0.6.1", "nat": false }
}
}
}
Only internal gets a MASQUERADE rule. Containers on lab can reach each other and the bridge gateway, but their packets are not source-NAT’d outbound — perfect for hardware-in-the-loop scenarios where you don’t want test traffic leaking onto the LAN.
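On such a device the NAT ruleset should only ever mention the internal subnet. With the nftables backend, the effective rules look roughly like this (table and chain names are illustrative; run nft list ruleset on your device for the real layout):
table ip nat {
    chain postrouting {
        type nat hook postrouting priority srcnat; policy accept;
        # only the NAT-enabled pool is masqueraded; 10.0.6.0/24 gets no rule
        ip saddr 10.0.5.0/24 ip daddr != 10.0.5.0/24 masquerade
    }
}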
Static Reservations
Pool allocation is dynamic by default, but you can pin a specific address by adding PV_NETWORK_IP:
{
"PV_NETWORK_POOL": "internal",
"PV_NETWORK_IP": "10.0.5.50"
}
Pantavisor honours the request as long as the IP is in the pool’s subnet and not already leased.
Coexistence with Legacy Containers
If you already have containers that bake lxc.net.0.ipv4.address directly into their lxc.container.conf (the pre-IPAM way), pantavisor scans for them at startup and reserves their addresses out of the pool before any dynamic allocation happens. New pool-using containers on the same subnet won’t be handed a colliding IP. You can migrate one container at a time without flag-day cutover.
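For reference, a legacy container of that kind typically carries lines like these in its lxc.container.conf (addresses are examples); during the startup scan, pantavisor would reserve 10.0.3.20 in any pool covering 10.0.3.0/24:
# pre-IPAM style: network baked straight into the container config
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.ipv4.address = 10.0.3.20/24
lxc.net.0.ipv4.gateway = 10.0.3.1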
NAT Backend: nftables Preferred
Pantavisor probes for nft and iptables at runtime and prefers nftables when available. The default Pantavisor appengine image now ships with nftables installed — no iptables binary needed (the nf_tables kernel backend has been in mainline since Linux 3.13, released in 2014). If you have a custom image without nftables, pantavisor falls back to iptables automatically.
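To check which backend a given image will end up on, query for the binaries on the device:
command -v nft && nft --version             # nftables present: preferred backend
command -v iptables && iptables --version   # fallback, used only when nft is absent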
Validation: Fail Fast, Refuse to Start
Two pre-start checks catch misconfigurations before any namespace work happens:
- Unknown pool reference — if a container’s PV_NETWORK_POOL names a pool that isn’t in device.json, pantavisor refuses to start it. In a TESTING update this triggers rollback; in steady state, reboot.
- Baked lxc.net.* + pool reference — a container can declare an IPAM pool or bake its own lxc.net.* config, but not both. Mixing them would silently leak orphan attributes when pantavisor rewrites the netdev type. The validate_config hook refuses such containers with a clear log line (an example of the rejected combination follows below).
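For illustration, this is the kind of combination the hook rejects: an args.json that opts into a pool while the same container's lxc.container.conf still bakes its own network (values are examples):
args.json:
{
    "PV_NETWORK_POOL": "internal"
}
lxc.container.conf:
lxc.net.0.type = veth
lxc.net.0.link = pvbr0
lxc.net.0.ipv4.address = 10.0.5.9/24
Drop one or the other; keeping both is exactly the ambiguity the check is there to prevent.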
Default Pool: pvcnet
Every BSP shipped from meta-pantavisor now includes a default pvcnet pool in device.json:
{
"network": {
"pools": {
"pvcnet": {
"type": "bridge",
"bridge": "lxcbr0",
"subnet": "10.0.3.0/24",
"gateway": "10.0.3.1",
"nat": true
}
}
}
}
It binds to lxcbr0 for compatibility with any pre-existing lxc-native containers that might already be on that bridge — the reservation walk above means they keep working unchanged. Set PV_NETWORK_POOL: "pvcnet" on a new container and it gets a 10.0.3.x address with internet access out of the box.
How to Adopt
- Pull a recent build of meta-pantavisor master onto your device. The default pvcnet pool ships in the BSP.
- For new containers, add PV_NETWORK_POOL to args.json: { "PV_NETWORK_POOL": "pvcnet" }. Optional: PV_NETWORK_HOSTNAME, PV_NETWORK_IP, PV_NETWORK_MAC (a fuller example follows this list).
- For new pools, add an entry under network.pools in device.json. Pantavisor creates the bridge on next start.
- For migration from lxc.net.*-baked containers, drop the baked entries and switch to PV_NETWORK_POOL. Or leave them as-is — the reservation walk keeps them working.
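A fuller args.json combining the optional keys might look like this (hostname, address, and MAC values are only examples, and any of the three can be omitted):
{
    "PV_NETWORK_POOL": "pvcnet",
    "PV_NETWORK_HOSTNAME": "my-app",
    "PV_NETWORK_IP": "10.0.3.50",
    "PV_NETWORK_MAC": "aa:bb:cc:00:11:22"
}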
Reference Containers in meta-pantavisor
Working examples for every IPAM scenario live in recipes-containers/pv-examples/ — copy a recipe, swap in your own payload:
| Recipe | Demonstrates |
|---|---|
| pv-example-device-ipam | Single-pool device.json (internal, 10.0.5.0/24, NAT) |
| pv-example-device-ipam-2pools | Two pools — one with NAT, one without |
| pv-example-device-ipam-lxcbr | Pool bound to lxcbr0 (legacy coexistence) |
| pv-example-net-server | Pool consumer with PV_NETWORK_HOSTNAME |
| pv-example-net-client | Pool consumer doing a TCP connect |
| pv-example-ipam-valid | Static reservation via PV_NETWORK_IP |
| pv-example-ipam-static | Legacy lxc-native container with baked lxc.net.0.ipv4.address |
| pv-example-net-pvcnet | Pool-using container on the default pvcnet pool |
| pv-example-ipam-nopool | Negative case — references a non-existent pool |
The 9-test IPAM testplan walks through each scenario end-to-end against the appengine image.
What Landed
- pantavisor — IPAM subsystem (pv_ipam_*), reservation walk, validate_config hook, nftables-preferred NAT setup.
- meta-pantavisor — example containers, default pvcnet pool in every BSP device.json, appengine nftables install, network.json support in container-pvrexport.bbclass, full testplan.
Try It Out
On a device with the latest meta-pantavisor build:
ssh -p 8222 _pv_@<device-ip>
ip addr show lxcbr0 # Default pvcnet bridge: 10.0.3.1/24
nft list ruleset # MASQUERADE rule for 10.0.3.0/24
Deploy a pool-using container (any container, just add "PV_NETWORK_POOL": "pvcnet" to its args.json):
pvcontrol ls # See the assigned IP
pvcontrol containers stop my-app
pvcontrol containers start my-app
pvcontrol ls # Same IP — lease was reused
Look for these in the pantavisor log:
[ipam] created bridge lxcbr0 with IP 10.0.3.1/24
[ipam] setup NAT (nftables) for pool pvcnet
[ipam] allocated 10.0.3.2/24 to my-app from pool pvcnet
[ipam] reusing existing lease for my-app: 10.0.3.2/24
Coming in 028
IPAM lands in the Pantavisor 028 release, where it ships in the prebuilt binary images you can flash directly onto a Raspberry Pi or any of the supported boards — no Yocto build required to try it out.
If you want to play with it sooner: watch this space for a 028-rcX drop in the coming days. The release candidates carry the same IPAM bits as 028 final and are the easiest way to kick the tires before the GA images go up.
Links
- meta-pantavisor PR #180 — example containers + default pool + testplan
- pantavisor PR #613 — IPAM subsystem (merged)
- IPAM testplan — 9 scenarios with expected results
- Example container recipes