Working as a Security Consultant, I’m less involved in Solaris administration tasks. Still, for some customers, I need to manage servers running Solaris 10.
One of the greatest features introduced by Solaris 10 is the “zones” concept (to keep things simple, it’s the operating-system-level virtualization mechanism introduced by Sun). I already explained in a previous post how to deal with zones connected to multiple VLANs. Today, I faced a strange issue…
Let’s imagine a Solaris server with one zone:
- Zone 0 (the global zone) is connected to VLAN 10.10.0.x/24 via e1000g0
- Zone 1 is connected to VLAN 10.10.20.x/24 via e1000g1
- The default gateway of zone 0 is 10.10.0.1
- The default gateway of zone 1 is 10.10.20.1
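For reference, a setup like this can be reproduced with `zonecfg`. The sketch below is an assumption about how zone 1 was created (the zone path is hypothetical; the address matches the `netstat` output later in this post, and `set defrouter` requires a recent Solaris 10 update):

```shell
# Hypothetical zonecfg session for zone 1 (shared-IP stack).
zone0# zonecfg -z zone1
zonecfg:zone1> create
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> add net
zonecfg:zone1:net> set physical=e1000g1
zonecfg:zone1:net> set address=10.10.20.100/24
zonecfg:zone1:net> set defrouter=10.10.20.1
zonecfg:zone1:net> end
zonecfg:zone1> commit
zonecfg:zone1> exit
```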
To configure a load-balancer in triangulation mode, I added an IP (10.10.0.130) on the loopback interface of zone 0:
```
zone0# ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
lo0:30: flags=2001000849 mtu 8232 index 1
        inet 10.10.0.130 netmask ffffff00
e1000g0:1: flags=1000843 mtu 1500 index 2
        zone app2
        inet 10.10.0.221 netmask ffffff00 broadcast 10.10.0.255
zone0#
```
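For the record, a loopback alias like the `lo0:30` logical interface above can be created with a single `ifconfig` command (a sketch, using the netmask shown in the output):

```shell
# Add 10.10.0.130 as an alias on the loopback interface of zone 0;
# Solaris creates a logical interface (e.g. lo0:30) for it.
zone0# ifconfig lo0 addif 10.10.0.130 netmask 255.255.255.0 up
```

Note that this configuration is not persistent across reboots unless it is also added to the relevant `/etc/hostname.*` configuration.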
Zone 1 has the following routing table:
```
zone1# netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
10.10.20.0           10.10.20.100         U         1     230919 e1000g1:1
224.0.0.0            10.10.20.100         U         1          0 e1000g1:1
default              10.10.20.1           UG        1    1255328
127.0.0.1            127.0.0.1            UH        5     278090 lo0:6
zone1#
```
The same IP (10.10.0.130) was configured on the load-balancer as a VIP, and zone 1 was configured as a client of the application load-balanced via 10.10.0.130.
Result? Zone 1 never uses its routing table to reach 10.10.0.130: the packets are passed directly to zone 0! The explanation is simple: both zones run in shared-IP mode and therefore share a single IP stack. Since 10.10.0.130 is configured on lo0 in zone 0, the kernel considers it a local address and delivers the traffic internally, without ever consulting zone 1’s default route, so the packets never reach the load-balancer.
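One general way around this (a suggestion, not something tested in the setup above): give zone 1 an exclusive IP stack, so it gets its own IP instance and routing table and no longer sees 10.10.0.130 as a local address. On Solaris 10 (8/07 or later), the change looks roughly like this; note that the NIC must then be dedicated to the zone, the `address` property must be removed from the `net` resource, and the interface is configured from inside the zone:

```shell
# Hypothetical sketch: switch zone1 to an exclusive IP stack.
zone0# zonecfg -z zone1
zonecfg:zone1> set ip-type=exclusive
zonecfg:zone1> commit
zonecfg:zone1> exit
```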