It is possible to set the remote endpoint of a GRE tunnel to a multicast
address. Such a tunnel becomes a broadcast tunnel (though the word
tunnel is not quite appropriate in this case; it is rather a virtual network).
    ip tunnel add Universe local 193.233.7.65 \
        remote 224.66.66.66 ttl 16
    ip addr add 10.0.0.1/16 dev Universe
    ip link set Universe up

This tunnel is a true broadcast network: broadcast packets are sent to
the multicast group 224.66.66.66. By default such a tunnel starts to
resolve both IP and IPv6 addresses via ARP/NDISC, so that if multicast
routing is supported in the surrounding network, all GRE nodes will find
one another automatically and will form a virtual Ethernet-like broadcast
network. If multicast routing does not work, it is an unpleasant but not
fatal flaw: the tunnel becomes an NBMA network rather than a broadcast
one. You may disable dynamic ARPing with:
    echo 0 > /proc/sys/net/ipv4/neigh/Universe/mcast_solicit

and add the required information to the ARP tables manually:
    ip neigh add 10.0.0.2 lladdr 128.6.190.2 dev Universe nud permanent

In this case packets sent to 10.0.0.2 will be encapsulated in GRE and
sent to 128.6.190.2.
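For completeness, the mirror configuration on the second node (the host
128.6.190.2 from the example above) might look as follows. This is only a
sketch of the symmetric setup; the final permanent entry is needed only if
dynamic ARPing is disabled on that node as well:

    ip tunnel add Universe local 128.6.190.2 \
        remote 224.66.66.66 ttl 16
    ip addr add 10.0.0.2/16 dev Universe
    ip link set Universe up
    ip neigh add 10.0.0.1 lladdr 193.233.7.65 dev Universe nud permanent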
It is also possible to facilitate address resolution using methods
typical of other NBMA networks, e.g. to start a user-level arpd daemon,
which will maintain a database of hosts attached to the GRE virtual
network, or to ask a dedicated ARP or NHRP server for the information.
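If no ARP or NHRP service is available, the same database can be kept in
an ordinary file and loaded with a small script. The sketch below assumes
a hypothetical two-column file /etc/gre-hosts.map mapping tunnel addresses
to real endpoint addresses; neither the file name nor its format belongs
to any standard tool:

    #! /bin/sh
    # Pre-load permanent neighbour entries for the GRE virtual network
    # from /etc/gre-hosts.map: one "<tunnel addr> <real addr>" pair per line.
    # The map file and its format are assumptions for this sketch.
    while read virt real; do
        # Skip empty lines and comments.
        case "$virt" in ""|\#*) continue ;; esac
        ip neigh replace "$virt" lladdr "$real" dev Universe nud permanent
    done < /etc/gre-hosts.map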
Actually, such a setup is the most natural one for tunneling; it is
really flexible, scalable and easily manageable, so it is strongly
recommended for use with GRE tunnels instead of the ugly hack with NBMA
mode and the onlink modifier. Unfortunately, for historical reasons
broadcast mode is not supported by IPIP tunnels, but this will probably
change in the future.