In this video, I dig deeper into the "Control Plane Learning" approach for VXLAN using BGP EVPN. I focus on "Bridging", i.e. intra-VNI communication, with a modern leaf-spine topology on EVE-NG.
I have searched various tutorials on VXLAN but got confused along the way, until I landed on this one. Now I understand the concepts of VXLAN and how to use this protocol. Thanks Engineer BitsPlease, you're awesome 👍
Amazingly detailed. One of the best videos on the subject I've seen. It goes through all the pieces in depth, very well explained. Thank you for creating this series.
Thank you BitsPlease for the great lessons about VXLAN. Your teaching is amazing at precisely summarizing the knowledge required for a VXLAN implementation. You've saved me from insanity when trying to learn VXLAN :)
A big thank you for this. You have explained it in such simple terms; it is so difficult to understand from documents alone. Thanks a lot, and I will look forward to other videos related to VXLAN EVPN.
Thanks for the time spent making the video. It got a bit confusing toward the last part, at least for me. Specifically, I could not visually map what you explained about MAC-VRF and IP-VRF to the configuration script.
Thanks Marco. Well, in an L2VNI you just have the MAC-VRF in play (IP-VRF in the next video), and there honestly isn't much to configure with it, as the l2vpn evpn address-family config takes care of it. MAC-VRF is just a construct to capture the fact that we are now exchanging MAC routes, and in order to distinguish all these MACs coming from different VLANs in the l2vpn evpn table, we need an RD, just like in the age-old MP-BGP days. Hope that helps.
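To make that concrete, here is a minimal NX-OS-style sketch of what the reply describes; the VLAN number, VNI, and auto RD/RT choices are assumed example values, not taken from the video. The VLAN-to-VNI mapping plus the per-VNI RD and route-targets under `evpn` is essentially all the MAC-VRF configuration there is:

```
! Assumed example values: VLAN 10 mapped to L2VNI 10010
vlan 10
  vn-segment 10010            ! map the local VLAN to the L2VNI

evpn
  vni 10010 l2                ! this block IS the MAC-VRF for VNI 10010
    rd auto                   ! RD keeps MACs from different VNIs distinct
    route-target import auto  ! RTs control which MAC routes are imported
    route-target export auto
```

With `rd auto` the switch derives the RD from its BGP router ID and the VNI, so each leaf's MAC routes stay unique in the l2vpn evpn table without any manual numbering.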
Like your video, but I have one clarification. Why do you use loopback IP addresses for the physical interface links between leaf and spine? I thought loopback IP addresses are only for the VTEP IPs. Can we have different IP addresses for the physical links and separate loopback addresses for the VTEPs?
As the host sends the GARP message, the local table of the leaf switch learns the MAC address of the host, but while advertising the details to the route reflectors (which are basically the spines), it additionally includes the VTEP IP of the leaf switch where the host is connected. Guys, agree with me?
Great video! Very informative, though I am still confused by the underlay and overlay concept. OSPF and multicast run over the underlay, and VXLAN and BGP EVPN run on the overlay for the control plane. Does that mean data flows on the overlay's data plane? If so, what happens if the OSPF neighborship fails? Will data still flow over the overlay via VXLAN and BGP EVPN?
Hello Joel, that was a very good explanation of VXLAN. I would like to know your home-lab specs for building these lab topologies. Could you recommend a server or PC that could handle them?
Can you please share the video where you discussed MP-BGP in detail? I checked your MPLS playlist but couldn't find the one where you talk about MP-BGP.
I have completed your entire VXLAN series; you explain things so nicely. I just have a question you didn't cover here: if I need device-level redundancy for a host, like with vPC, how do I achieve it in VXLAN?
The two switches involved in the vPC act as a single VTEP from a data-plane perspective. BGP peering has to be set up individually from both switches towards the spines.
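On NX-OS, one common way to realize that "single VTEP from the data-plane perspective" is an anycast VTEP: both vPC peers share a secondary IP on the NVE source loopback. A hedged sketch, with all addresses being made-up examples:

```
! On BOTH vPC peer switches (example addressing, not from the video)
interface loopback1
  ip address 10.1.1.1/32              ! unique per switch (peer uses 10.1.1.2/32)
  ip address 10.1.1.100/32 secondary  ! identical shared anycast VTEP address

interface nve1
  source-interface loopback1          ! remote VTEPs see only the shared
                                      ! secondary IP for this vPC pair
```

Remote leafs then learn the vPC-attached hosts behind one VTEP address (the shared secondary), while each switch still maintains its own BGP sessions to the spines, as the reply says.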
Hi, thanks. (1) Must the VXLAN tunnel go through the spine device, or is it between leaf and leaf? From an IP perspective it seems the next hop is the spine. (2) Why do we need MPLSoUDP in the overlay if VXLAN is already leveraged as the overlay protocol? Thanks
1) The VXLAN tunnel is created between leafs, but the underlay path goes via the spine, since leafs are not directly connected to each other. 2) Can you rephrase this one? I didn't get the question.
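One way to see the "tunnel between leafs, underlay via the spine" split is in the config itself: the routed point-to-point links and OSPF are the underlay, while the NVE interface, sourced from a loopback, is the overlay tunnel endpoint. A sketch with assumed addressing and an assumed OSPF process name:

```
! Underlay: routed leaf-to-spine link (example /31 addressing)
interface Ethernet1/1
  no switchport
  ip address 192.168.0.1/31
  ip router ospf UNDERLAY area 0

! Loopback advertised into OSPF; this is where the tunnel terminates
interface loopback0
  ip address 10.0.0.1/32
  ip router ospf UNDERLAY area 0

! Overlay: VXLAN tunnel endpoint, MAC reachability learned via BGP EVPN
interface nve1
  source-interface loopback0
  host-reachability protocol bgp
```

The tunnel is logically leaf-to-leaf (loopback to loopback), and the spine simply routes the encapsulated packets between those loopbacks.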
@@BitsPlease (1) Virtually, the traffic traversing the tunnel is transparent to the underlying physical topology, right? Meaning it is more efficient, with no visible intermediate hops; can I conclude that? (2) If the overlay network is already running with VXLAN, why do we need MPLSoUDP? From my reading, some overlay networks use both VXLAN and MPLSoUDP as overlay transport.
Hi, one of the advertised advantages of VXLAN is the huge number of VNIs compared to the traditional 1-4095 VLAN range. I'm struggling to understand how VXLAN helps with this, as you still have to map each VNI to a traditional VLAN; that somehow means you can only have as many VNIs as you have traditional VLANs. Another question: in this scenario, say on leaf NXOS3 you already had 4095 hosts, each on a separate VLAN. When you want to add a new host on the same leaf on its own separate VLAN, how would you do it?
Every VLAN on any leaf is locally significant. Now imagine 4 leafs:
Leaf-1 has VLANs 1-4095 mapped to VNIs 1-4095
Leaf-2 has VLANs 1-4095 mapped to VNIs 4096-8190
Leaf-3 has VLANs 1-4095 mapped to VNIs 1-4095
Leaf-4 has VLANs 1-4095 mapped to VNIs 4096-8190
Technically we have increased the broadcast domains from 4095 to 8190, haven't we? For example, VLAN 1 on Leaf-2 can talk to VLAN 1 on Leaf-4 using VNI 4096. Now imagine 4 more switches with VNIs going from 8191-16380. Continuing like this, VXLAN can scale to 16 million segments. The above is just an example, though; no one uses 4095 VLANs on a switch due to resource limitations.
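In config terms, the re-use works because the vn-segment mapping is local to each leaf. A minimal sketch following the reply's (assumed) numbering:

```
! Leaf-1: its local VLAN 1 participates in VNI 1
vlan 1
  vn-segment 1

! Leaf-2: the SAME local VLAN number 1 participates in VNI 4096
vlan 1
  vn-segment 4096
```

The VLAN ID never leaves the leaf; only the VNI travels in the VXLAN header, so the same VLAN number can back different broadcast domains on different leafs.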
@@BitsPlease With the mapping you have used, how will a host in VLAN 1 on Leaf-1 (mapped to VNI 1) be able to communicate with a host in VLAN 1 on Leaf-2 (mapped to VNI 4096), assuming the VNI uses the same number as the vn-segment?
@@BitsPlease I'm looking to understand how this can be useful in my environment. I have a physical interface on a router that has exhausted its 4094 subinterfaces mapped to VLANs. My current solution is to use another physical interface on the router and connect it to a different switch to serve the hosts. Can this technology help there?
Also, back to your problem (if I understood it right): you can't really use VXLAN to increase the number of VLANs beyond 4095, because VXLAN doesn't bypass that local per-switch VLAN limit. What it does, in a multi-tenant cloud environment, is give you more scalability by letting you re-use the same VLAN numbers across multiple customers, kept apart by distinct VNIs.