Comware IRFs – A Few Lessons

Look in almost any organisation that’s been running for more than a few years and you’ll see legacy IT kit still keeping business-critical services alive. There are places where IPX is still a thing today (no, really). Heck, when I joined a previous employer in 2010, I was tasked with keeping a locally critical DOS-based playout system going (it’s since been decommissioned).
Today, it was an old ring of 3Com switches that appeared on my radar. They’re configured in what 3Com call an IRF (Intelligent Resilient Framework): essentially a cluster of switches that acts as a single management node.
By configuring them in a ring topology, you gain resilience. Should any single node fail, the rest will stay up. Sounds brilliant.
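For context, building the IRF on Comware kit boils down to giving each switch a unique member number and binding physical ports to two logical IRF ports; cabling those ports member-to-member is what closes the ring. A rough sketch of one member’s configuration (the member number, priority and interface names here are illustrative, and the exact syntax varies between Comware versions):

    irf member 1 priority 32
    irf-port 1/1
     port group interface ten-gigabitethernet 1/0/49
    irf-port 1/2
     port group interface ten-gigabitethernet 1/0/50
    irf-port-configuration active

With each member’s IRF port 1 cabled to a neighbour’s IRF port 2 and the last switch cabled back to the first, you have your ring. The display irf and display irf topology commands will show you the members, which one is master, and whether the ring is intact.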
And for the most part it is brilliant. Except the work we had planned involved removing a node from the ring (not a big issue) and re-ordering some of the other nodes (a bigger issue, and one that couldn’t be avoided).
To re-order nodes in a ring, you need to break it in at least two places. Anyone who’s worked with clusters should have alarm bells ringing right now, because that’s exactly how you end up with a split brain: each separate part believes it is the cluster and carries on operating.
Thankfully, a lot of systems run a quorum check before the units that can still see each other are confirmed as the main cluster. By default, many Windows clustering services require visibility of more than 50% of the configured cluster nodes. You can imagine the problems that caused in a two-node cluster until a witness disk was added to the setup.
Not so with network switches, where it’s expected that the membership and arrangement of nodes will be somewhat dynamic. That meant we had two separate groups of switches, each believing it was the cluster. When it came time to bring them back together in a different order, both had to agree on the configuration of the cluster.
In our experience, the side with the most nodes seems to win (though that could also be due to having higher IRF master priorities). The other side will reboot and re-join the cluster. That’s the sort of thing you have planned outages for.
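On the priority point: all else being equal, IRF elects the member with the highest priority as master, and you can both set that priority and check who currently holds the role. If you want a particular side of a planned split to come out on top, weighting the priorities well ahead of the work at least removes one variable (the member numbers and values below are illustrative):

    irf member 1 priority 32
    irf member 2 priority 1
    save
    display irf

The display irf output marks the current master, which is worth confirming both before and after the merge.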
Another issue we ran into recently saw a single node in a two-switch cluster lose its configuration. It was traced back to the switch running out of disk space during a firmware upgrade, which left it blank at the end of the reboot. You’d expect that re-adding the IRF configuration and connecting it to the live switch would fix things. Let’s just say we were glad to have backups when the live switch rebooted.
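The lesson we took away: check free space and get a copy of the configuration off the stack before any firmware work. On Comware that’s nothing more exotic than the following (the TFTP server address and filename are placeholders):

    dir
    save
    backup startup-configuration to 192.0.2.10 irf-backup.cfg

It won’t stop a member coming back blank, but it does turn the recovery into a copy-and-paste job rather than a reconstruction from memory.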
