Single vs Multi Channel Architectures for WiFi
Almost every area of IT has its holy wars, whether it’s processors (AMD vs Intel vs ARM), GPUs (AMD vs NVIDIA) or even operating systems (Linux, Windows and even BSD). In my own experience, you’re best off using the correct tool for the job. Sometimes one OS, GPU or processor is a better fit for the job than another.
When it comes to wireless network architectures, there are two RF models out there: single channel and multi-channel. This article is where I don the asbestos suit and enter the arena, trying to be as objective as possible.
The multi-channel architecture is the one you’re most likely familiar with. The area you need to cover is broken into a number of different cells, one per AP. The number and transmission power of the cells depend on the coverage and density you’re planning for. Generally, for a higher density design, you’ll see more APs, within reason.
Each cell is on a different channel to its neighbours. The main reason for doing this is that every client wishing to use the wireless network in a given area on a given channel will need to share airtime – only one can transmit at a time. By spreading clients among channels, the chance of collisions is reduced. The more spectrum you can do this with, the lower your contention ratio should be.
On 2.4GHz, you’ve got three non-overlapping 20MHz channels to play with. The non-overlapping bit is rather important: you don’t want neighbouring cells generating adjacent channel interference.
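To see why only three channels survive, a quick sketch helps. Assuming the usual 2.4GHz channel numbering (centre frequency of 2407 + 5 × channel MHz) and 20MHz wide channels, two channels overlap whenever their centres sit closer together than one channel width:

```python
# Sketch: check whether two 2.4 GHz channels overlap, assuming 20 MHz wide
# channels centred at 2407 + 5 * channel MHz (channels 1-13).

def centre_mhz(channel):
    """Centre frequency of a 2.4 GHz channel in MHz."""
    return 2407 + 5 * channel

def overlaps(ch_a, ch_b, width_mhz=20):
    """Two channels overlap when their centres are closer than one channel width."""
    return abs(centre_mhz(ch_a) - centre_mhz(ch_b)) < width_mhz

# The classic 1/6/11 plan keeps 25 MHz between centres, so no overlap:
print(overlaps(1, 6))   # False
print(overlaps(6, 11))  # False
print(overlaps(1, 4))   # True - centres only 15 MHz apart
```

That 25MHz spacing is exactly why the 1/6/11 (or 1/5/9/13 in some regions) plans exist: anything tighter and neighbouring cells start interfering with each other.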
With only three channels to play with, it’s not long before you’re re-using channels. Hopefully without overlap, but even then you will encounter co-channel interference. That raises the noise floor, drops the signal-to-noise ratio, drops throughput and increases both transmission time and collisions.
While things are better on 5GHz, there are a few gotchas regarding the extra spectrum you have available. What’s called UNII-3 spectrum in the standards is actually licensed in the UK. That effectively writes off a number of usable channels.
At the bottom end of 5GHz, the first four 20MHz channels can be used with similar caveats to the 2.4GHz spectrum. Anything above that (the largest chunk of freely available spectrum) requires the use of DFS. Not the furniture store, but Dynamic Frequency Selection, which acts as a radar avoidance feature.
While there is a possibility you may encounter the odd rogue DFS event that causes an AP to change channel, or at least go quiet, I’d recommend the use of DFS channels anywhere you’re not constantly being painted by radar.
On that note, let’s go back to the original idea of this multi-channel arrangement – clients will be spread across the different channels as they move physically through the area in question. This means clients will need to roam between APs, which can be sped up a bit through features such as OKC (Opportunistic Key Caching).
In a high density environment, you can operate tight, low powered cells to spread the load among APs. Remember, it’s a trade-off between co-channel interference driving up the noise floor and number of clients on a channel burning airtime.
The more clients on-channel, the less airtime each client can have. This is impacted even more when low data rates are in play: it takes longer for the client in question to transmit the same data. That gets even worse when you consider that beacons are transmitted at the lowest basic rate, burning even more airtime. If you ever needed a reason to kill off those old data rates, admittedly at the cost of some perceived coverage, there it is.
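A rough back-of-envelope calculation shows how brutal low rates are on airtime. The preamble duration and frame size below are illustrative assumptions rather than exact 802.11 timings, but the shape of the result holds:

```python
# Rough airtime comparison: the same frame sent at the lowest basic rate
# versus a mid-range OFDM rate. The preamble duration and payload size
# are illustrative assumptions, not exact 802.11 timings.

def airtime_us(payload_bytes, rate_mbps, preamble_us=192.0):
    """Approximate time on air in microseconds: fixed preamble plus payload."""
    return preamble_us + (payload_bytes * 8) / rate_mbps

beacon = 300  # a plausible beacon frame size in bytes (assumption)

print(f"{airtime_us(beacon, 1):.0f} us at 1 Mbps")    # 2592 us
print(f"{airtime_us(beacon, 24):.0f} us at 24 Mbps")  # 292 us
```

Nearly nine times the airtime for the same frame – and every SSID on every AP is beaconing roughly ten times a second, so that cost is paid constantly whether any client is using the old rates or not.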
(On a side note, perceived coverage is a problem we see in FM broadcast. Cranking the audio processing can increase the perceived coverage area, at the cost of audio quality or even stereo service in some instances).
Taking that into account, you can see why more, smaller cells providing more spectrum to clients can be a good thing. It’s only possible to take it so far though, as you still have the co-channel interference problem to worry about.
Compare this approach to a single channel architecture. Here every AP is transmitting on the same channel, with the same BSSID. The client has no knowledge that they are roaming between APs as they move around the area.
The cleverness in this comes in the scheduling of clients on the channel. When it comes to getting a chance to transmit, there are some gaps between frames that can’t be used, which exist to protect the previously transmitted frame. Once these are out of the way, the contention window comes into play. Each client chooses a random offset of time within this window as the point at which to start transmitting. If another radio jumps in ahead of you, it gets the slot to transmit its frame. For every slot that passes idle, the radio counts its random number down towards zero.
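That countdown process can be sketched as a toy simulation. This is a deliberately simplified model of DCF-style contention – real 802.11 doubles the window after collisions and treats simultaneous zeros as a collision, both of which are ignored here:

```python
import random

# Toy sketch of DCF-style contention: each station draws a random backoff
# from the contention window, all counters tick down together each idle
# slot, and the first station to reach zero transmits. Collision handling
# and window doubling are deliberately omitted.

def contend(n_stations, cw=15, seed=None):
    """Return the index of the station that wins the medium."""
    rng = random.Random(seed)
    backoffs = [rng.randint(0, cw) for _ in range(n_stations)]
    while min(backoffs) > 0:                 # every idle slot, counters decrement
        backoffs = [b - 1 for b in backoffs]
    return backoffs.index(0)                 # first to hit zero transmits
    # (in reality, two stations hitting zero together would collide)

winner = contend(5, seed=42)
print(f"station {winner} transmits first")
```

The key property is that losers keep their partially counted-down backoff, so a station that narrowly missed out is more likely to win the next round – that’s what makes the sharing reasonably fair over time.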
A bit of clever manipulation of the window size by the AP can result in a lot of radios sharing airtime effectively. This is enhanced further by the multiple APs all claiming to be the same BSSID. Clients in different physical locations, separated enough that they can be received clearly by different APs, give you the ability to have multiple clients transmitting at once.
A further benefit to clients is the ability to move around a space without needing to roam between APs. Every AP operates in such a way that the whole system pretends to be a single one.
This means the RF design element varies a bit from your usual small cells on different channels. So long as you get the spacing about right and have the radios to cope with the expected density, everything should be good.
“Should”, it’s a good word. I’ve seen real world implementations where the power is cranked up to gain coverage. Combined with allowing 1 Mbps data rates, it resulted in clients outside the building latching on and burning airtime. In high density locations, you could see incredibly high re-transmit rates and poor throughput.
Don’t get me wrong, there is a place where single channel architectures shine – VoIP in office spaces. The lack of roaming on the client side means there’s less chance of a call being dropped, though OKC and friends are making this less of an issue in multi-channel architectures.
Once the density starts cranking up to hundreds of clients in a small space, you start to run into real problems with clients fighting for airtime. Even with multiple APs on the same channel covering an area, you don’t necessarily get the separation needed to be able to receive the frames arriving at the same time clearly.
In this scenario, vendors recommend using “channel layering” or, as you might otherwise call it, a multi-channel architecture. It makes sense that after a certain point you just need as much spectrum as possible to share the load across. But it also feels like the key benefits of the single channel architecture have disappeared when doing this.
Sadly, real world experience tells me this doesn’t always work. Clients will often latch onto a single channel and have no incentive to ever roam. The RSSI never drops and the client will just keep dropping the data rate until it gets the re-transmit count down. If you’re unlucky, most of your clients will have settled on one channel in a small area, resulting in a high collision rate and a poor experience for all.
802.11k should be able to assist with this. However, it relies on client support, which can be spotty at best.
It gets even worse if you’re transitioning from a single channel deployment to a multi-channel deployment. The high TX powers used will result in clients latching on to the single channel system and having no incentive to roam onto the multi-channel system as they move between areas.
On a more positive note, it’s very easy to deliver incredible client throughput rates in a single channel architecture. You can use 80 MHz and even 160 MHz channel widths, combined with high QAM levels, without worrying about channel re-use. You do need no neighbours anywhere near you to pull this off in the real world though. If they appear on any of your secondary channels, one of you is going to have a poor experience.
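Where those headline rates come from is straightforward arithmetic. A back-of-envelope 802.11ac calculation, assuming a single spatial stream, the short guard interval (3.6 µs symbols) and the standard VHT data subcarrier counts, looks like this:

```python
# Back-of-envelope 802.11ac PHY rate from channel width and modulation,
# assuming one spatial stream and the short guard interval (3.6 us symbols).

DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}  # VHT data subcarriers

def phy_rate_mbps(width_mhz, bits_per_subcarrier, coding_rate, symbol_us=3.6):
    """Data bits carried per OFDM symbol, divided by symbol time."""
    bits = DATA_SUBCARRIERS[width_mhz] * bits_per_subcarrier * coding_rate
    return bits / symbol_us  # bits per microsecond == Mbps

# 256-QAM (8 bits/subcarrier) with rate-5/6 coding, i.e. VHT MCS 9:
print(f"{phy_rate_mbps(80, 8, 5/6):.1f} Mbps")   # 433.3
print(f"{phy_rate_mbps(160, 8, 5/6):.1f} Mbps")  # 866.7
```

Doubling the width doubles the subcarriers, hence the rate – but it also doubles the spectrum you’re occupying and your neighbours’ secondary channels can sit in, which is exactly the re-use trade-off above.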
While there are a lot of issues in the single channel system, it’s not all plain sailing in the multi-channel world either. As you’ve probably picked up by now, we’re transitioning from a single channel model to a multi-channel model. One of the key factors I’ve put in the design phase is to build for density first. At a low level, that means 20 MHz channels over 40 or 80, resulting in a lower top throughput rate, but more spectrum available to spread clients across.
We’re also clamping transmission power and relocating APs. In the single channel deployment, APs were often placed in corridors with a clear view of each other. Not an issue when transmission power was cranked up. Doing the same with a system running automated radio management will result in shrunken cells and a poor experience in office spaces.
While all the planning, modelling and surveying is taking time and slowing things down, it’s proving to be worth the effort. User feedback has improved considerably in the buildings we’ve completed so far. That doesn’t take into account that the back-end is being re-engineered as well to be more robust, reliable and flexible. Changing the RADIUS servers has done wonders for the support side of things and reliability in off-site eduroam authentications. There’s more to come with firewalls, backhaul and monitoring/analysis.
We’ve still got a way to go with the back-end, but the change so far – simply moving to a well planned multi-channel architecture with a knowledgeable team behind it – has done wonders. That includes buildings constructed less than a year ago and originally deployed with the single channel system.
One thing I don’t want you to come away from this article thinking is that there’s no place for single channel architectures. In the right scenario (relatively low density, possibly VoIP in an office space), it’ll do the job. For a large campus network seeing over 30k clients on an average day, we need all the spectrum we can get to keep those clients talking.