A couple of weeks ago I spotted a really suspicious flow in the logs that immediately made me suspect a bug in the controller. A packet was arriving at a port of a switch and was being output from the very same port of that switch. This was weird: if the packet was supposed to be transmitted back again, why had it ended up there in the first place? Was it a bug in the controller that first forwarded the packet to the switch and then bounced it back towards the previous switch? A logical fault in the controller was definitely the first suspect. But it later turned out that this was not the case. The details are as follows.
Consider the above topology, where three switches s1, s2, and s3 are connected linearly and all together managed by the controller c1. In addition, s1 and s2 are connected to two hosts h1 and h2, respectively. The controller, switches, hosts, their connections, and connection port numbers are given in the figure. The scenario generating the subtle gotcha is as follows.
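Since the figure is not reproduced here, the wiring can also be captured as a small port map. Note that the concrete port numbers below are my own assumption; only the connectivity itself (who connects to whom) comes from the description above.

```python
# Hypothetical port numbering for the linear topology h1--s1--s2--s3,
# with h2 attached to s2. The figure in the original post is the
# authoritative source for the real numbers.
LINKS = {
    ("s1", 1): "h1", ("s1", 2): "s2",
    ("s2", 1): "h2", ("s2", 2): "s1", ("s2", 3): "s3",
    ("s3", 1): "s2",
}

def neighbors(node):
    """All (port, peer) pairs of a switch, derived from LINKS."""
    return sorted((port, peer) for (sw, port), peer in LINKS.items() if sw == node)

print(neighbors("s2"))  # [(1, 'h2'), (2, 's1'), (3, 's3')]
```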
1. h2 sends a packet destined to h1, whose attachment point is not learnt yet by c1.
2. c1 learns the attachment point of h2 and instructs s2 to flood the request.
3. s2 floods the request towards s1 and s3.
4. s1 receives the flooded packet and sends it to c1. c1 still does not know the attachment point of h1, so it instructs s1 to flood the request for a second time.
5. h1 receives the flooded request and replies. From the reply, c1 learns the attachment point of h1.
6. Meanwhile, s2's flood (see 3rd step) reaches s3, which sends the packet to c1.
7. This time c1 knows the attachment point of the destination. c1 figures out that the route to h1 goes back through s2, i.e., s1 is reachable via the very port the packet arrived on, and hence instructs s3 to output the packet from the very same input port of the packet.
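Nothing exotic is needed at the controller to reproduce this: a global MAC-learning table plus next-hop forwarding is enough. Here is a minimal sketch of the race; the Controller class and the port numbers are my own assumptions for illustration, not the actual controller code.

```python
# Next-hop port toward each destination switch, per switch (linear chain).
# Port numbering is hypothetical; see the figure for the real numbers.
NEXT_HOP = {
    "s1": {"s2": 2, "s3": 2},
    "s2": {"s1": 2, "s3": 3},
    "s3": {"s1": 1, "s2": 1},
}

class Controller:
    def __init__(self):
        self.mac_table = {}  # MAC -> (switch, port) attachment point

    def packet_in(self, switch, in_port, src, dst):
        # Learn the source's attachment point (first sighting wins, for brevity).
        self.mac_table.setdefault(src, (switch, in_port))
        if dst not in self.mac_table:
            return ("flood", None)       # destination unknown: flood
        dst_switch, dst_port = self.mac_table[dst]
        if dst_switch == switch:
            return ("output", dst_port)  # destination attached locally
        return ("output", NEXT_HOP[switch][dst_switch])

c1 = Controller()
print(c1.packet_in("s2", 1, "h2", "h1"))    # steps 1-3: ('flood', None)
print(c1.packet_in("s1", 2, "h2", "h1"))    # step 4:    ('flood', None)
print(c1.packet_in("s1", 1, "h1", "h2"))    # step 5: h1's reply teaches c1 where h1 is
action = c1.packet_in("s3", 1, "h2", "h1")  # steps 6-7: the delayed flood copy
print(action)  # ('output', 1) -- the very same port the packet arrived on
```

The last packet-in is the delayed flood copy from step 3: by the time it reaches s3, the destination is already known, so the computed output port coincides with the input port. The obvious guard would be to compare the computed output port against the packet's input port before emitting the action.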
I have no idea how likely such a subtle race is, but somehow it happened to show up in my logs, right at the lines I happened to glance at, and it wasted my whole day. The saying was right: the network is really unreliable. Gorgeous!