Choosing LPWAN technology for network capacity

An article by Alan Woolhouse, Chair of the Weightless SIG Marketing Working Group. This is the second article in a series of exclusive posts dedicated to LPWAN technologies and solutions.

Choosing IoT connectivity technology needs careful consideration of multiple technical and commercial factors linked closely to individual use cases. Different applications favour different technologies, so what the product is designed to do will be a major factor in the decision. Last time out we took a brisk gander through the seemingly endless technology options in order to establish a framework. If you're following this series you'll recall that I grouped the technologies into three categories – LAN/PAN at one end, 3GPP options at the other and an increasingly busy centre ground roughly described as LPWAN. We are seeing the emergence of new players and convergence from incumbents – in particular from the 3GPP stable. LPWAN is going to be the focus of the rest of this series and my goal is to provide guidance to help you make decisions.

What matters?

Choosing connectivity is complex – there are a lot of features and benefits, often conflicting, to weigh in the balance. So let’s distil some of the key characteristics that will define an IoT connectivity technology and from there we can more easily make decisions about what is important to our particular use cases. We think that the strength of an IoT connectivity technology can be defined in terms of the following eight parameters.

  • Capacity
  • Quality of Service
  • Range
  • Reliability
  • Battery life
  • Security
  • Cost
  • Proprietary vs Standard

Modulation scheme

When IoT connectivity technologies are being considered, adopters rightly consider parameters like cost, battery life and range, but it is easy to overlook the importance of network capacity. In the absence of empirical data from real world network deployments, we can be tempted to rely too heavily on theory rather than properly modelled scenarios, ignoring the critical parameters that limit network capacity. Capacity is not just about the number of simultaneously connected nodes; it is about mean data packet length, transmission time, frequency of transmissions and interference mitigation. Scalability also depends on the fundamental technology behind the network, from ultra narrow band at one end of the spectrum to wide band at the other. Not surprisingly, both give rise to upsides and downsides and, as we will see, the capacity sweet spot is a compromise between the two.
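To make this concrete, below is a minimal back-of-envelope sketch in Python. Every figure in it (channel data rate, packet size, reporting frequency and a ~30% cap on uncoordinated channel utilisation) is an illustrative assumption rather than a measurement from any real network.

    # Back-of-envelope: how many devices can one LPWAN channel carry?
    # All figures are illustrative assumptions, not measured values.
    def devices_per_channel(data_rate_bps, packet_bytes, reports_per_day,
                            max_channel_load=0.3):
        """Estimate the device count a single channel can support.

        max_channel_load caps utilisation, reflecting the limits of
        uncoordinated random access discussed later in this article.
        """
        airtime_s = packet_bytes * 8 / data_rate_bps      # time on air per packet
        busy_s_per_device = airtime_s * reports_per_day   # daily airtime per device
        usable_s = 24 * 3600 * max_channel_load           # usable seconds per day
        return int(usable_s // busy_s_per_device)

    # Example: a 12.5kHz narrow band channel at ~10kbps, 20-byte packets, hourly reports
    print(devices_per_channel(data_rate_bps=10_000, packet_bytes=20, reports_per_day=24))

With these assumptions a single channel supports tens of thousands of devices per day, but halving the data rate, doubling the packet size or reporting more often eats directly into that headroom.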

You know the routine now – I'm going to divide the market into groups, in this case based on modulation scheme: ultra narrow band, narrow band and wide band. These are very different approaches to wireless communications and this matters. In terms of capacity it matters a lot, given the limited bandwidth and restrictions on transmit power that characterise licence exempt sub-GHz spectrum, specifically the Industrial, Scientific and Medical (ISM) and Short Range Device (SRD) bands. Stay with me, this will shortly make sense – first the extreme ends of the spectrum, then the optimum middle ground.

Ultra narrow band: The concept behind ultra narrow band (UNB) is that as the transmission bandwidth is reduced the amount of noise entering the receiver also falls. With a very narrow bandwidth, the noise floor is reduced considerably, resulting in long range with low transmit power – so far so good, that sounds like it ticks two of the important boxes for LPWAN. However, an ultra-narrow channel can only carry low data rates and so UNB systems tend to be associated with small packet-size transmission. Also ultra narrow band cannot readily support bidirectional communications and so tends to be deployed in low end use cases where reliability and quality of service (QoS) are less important.
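The relationship between bandwidth and noise can be quantified with the standard thermal noise formula: the noise floor at room temperature is roughly -174dBm/Hz plus 10·log10 of the receiver bandwidth. The sketch below compares three illustrative bandwidths; real products will differ, and wide band systems claw back much of the gap through spreading gain.

    import math

    def thermal_noise_floor_dbm(bandwidth_hz):
        """Thermal noise floor at room temperature: -174 dBm/Hz + 10*log10(BW)."""
        return -174.0 + 10.0 * math.log10(bandwidth_hz)

    # Illustrative receiver bandwidths (assumed, not vendor figures)
    for label, bw in [("ultra narrow band (100 Hz)", 100),
                      ("narrow band (12.5 kHz)", 12_500),
                      ("wide band (1 MHz)", 1_000_000)]:
        print(f"{label}: noise floor ~ {thermal_noise_floor_dbm(bw):.1f} dBm")

On these assumptions the 100Hz receiver sees a noise floor roughly 40dB lower than the 1MHz receiver, which is where the long range at low transmit power comes from.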

Wide band: An alternative approach is to adopt a wide channel – often 1MHz or more – and then to use spreading of the data to gain range. This brings flexibility as the spreading factor can be varied depending on the channel conditions, and in smaller cells very high data rates can be adopted. However, there are few spectrum bands wide enough to support multiple wide band channels, certainly in licence exempt spectrum, so different terminals and base stations need to share the same spectrum. Their transmissions can be differentiated through use of different spreading factors, but only in a tightly time-synchronised and power controlled system with strong central control. This is difficult to achieve for the short and occasional communications made by most IoT terminals and effectively rules out multiple overlapping public and private networks.

Narrow band: As in many other situations, the optimum configuration or ‘sweet spot’ is often found between the two extremes and so in terms of quality of service (QoS), capacity, cost and network efficiency, we find that narrow band technologies offer a great compromise. Narrow band channels, around 12.5kHz wide, offer optimal capacity for uplink dominated traffic of moderate sized data payloads from a large number of terminal devices.
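One way to see why this middle ground helps capacity is simply to count channels. The sketch below assumes roughly 7MHz of usable licence exempt sub-GHz spectrum, of the order of the European 863-870MHz SRD band; regional regulations and duty cycle rules will change the exact numbers.

    # How many channels fit in a licence exempt sub-GHz band?
    # 7 MHz of usable spectrum is an assumption for illustration only.
    USABLE_SPECTRUM_HZ = 7_000_000

    for label, channel_hz in [("narrow band (12.5 kHz)", 12_500),
                              ("wide band (1 MHz)", 1_000_000)]:
        print(f"{label}: {USABLE_SPECTRUM_HZ // channel_hz} channels")

Several hundred narrow band channels against a handful of wide band channels gives far more scope for frequency planning and reuse across overlapping networks.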

Below we detail how each of these factors might impact your IoT project. We then list a number of characteristics that show how Weightless technology addresses the requirements in the context of these parameters.

Network capacity for the IoT

Today, the relatively few LPWAN networks in existence are generally not capacity constrained. There are currently few devices connected to them, often in trial mode, where aspects such as range and functionality are being tested. But if the predictions of tens of billions of devices are even remotely correct then this will all change in the next few years as the number of devices grows rapidly. Base stations could have hundreds of thousands of devices connected at any point and just as the iPhone stimulated the data crunch, we could see a “machine crunch” on the emerging networks.

We are used to discussing and defining capacity on cellular networks. This is typically stated in measures such as bits/Hz, where a technology is rated according to how much data it can transfer per unit of radio spectrum owned by the operator. Each generation strives for better efficiency through improved radio technology, better scheduling of traffic and so on. When the technology is unable to cope, operators deploy additional spectrum where they can acquire it at auction, and deploy smaller cells. The latter has been the key driver of capacity growth over recent decades, delivering much of the 100-fold or so increase in capacity needed to meet smartphone requirements. Smaller cells mean more cells and, in turn, more base station hardware. Both the capital and operating expenditure of networks increase significantly as the cell size decreases.
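As a rough illustration of how that 100-fold growth decomposes, total network capacity can be treated as the product of spectral efficiency, spectrum owned and the number of cells. The figures below are assumed purely for illustration.

    def network_capacity_bps(spectral_efficiency_bps_per_hz, spectrum_hz, n_cells):
        """Total capacity scales with efficiency, spectrum and cell count."""
        return spectral_efficiency_bps_per_hz * spectrum_hz * n_cells

    # Assumed figures: modest gains in each factor multiply together
    baseline = network_capacity_bps(1.0, 20e6, 100)    # older network
    today = network_capacity_bps(3.0, 60e6, 1_100)     # better radio, more spectrum, smaller cells
    print(f"capacity growth ~ {today / baseline:.0f}x")

The point is that cell splitting is the biggest multiplier, and it is also the one that drives up capital and operating expenditure fastest.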

When designing for IoT it is second-nature for engineers to reach into the same toolbox as for cellular. But this is a mistake – there is much about IoT that is very different and many of the same techniques just do not work. Some of the key differences are:

  • Short messages. Many IoT devices send tiny amounts of data. A car park sensor, for example, need only report 1 bit of data indicating whether the parking space is occupied – a car is there or not there. A thermostat might need 8-16 bits. A locating device perhaps as much as 8 bytes. This is typical for IoT – small packets of user data. But these payloads are usually insignificant compared to the overall packet size. For example, if IPv6 addressing is used then an address alone is 128 bits long, so a device reporting its identity alongside its payload could see data volumes increase 10-fold. Other signalling messages such as location updates will similarly consume network resources – a device in a vehicle that updates its location for network management purposes but only transmits user data once a day could generate 1,000 times more signalling data than user data. Designing carefully for short messages could easily improve capacity by an order of magnitude (a simple overhead sketch follows this list).
  • Random timing. Most cellphone interactions start at a random time – the point when someone wants to call a user, or decides to run a search on a smartphone. The device then goes through a "random access" phase to initiate communications with the network, after which the network provides dedicated resource for the duration of its communications or "session". Random access is great for a user with a mobile phone. For a network it is inefficient. The larger the network, or the more connections on it, the higher the probability that multiple users will attempt to access the network resource at the same time and clash. When this happens, all of the colliding transmissions are often lost. The users then repeat their transmissions in order to increase the probability of a successful connection. The efficiency of such channels is well described by "Aloha access" theory, which tells us that at best they achieve around 30% utilisation. Above this level there are so many message collisions and re-transmissions, which then collide again multiple times, that the channel capacity spirals downwards and a reset is needed. In the cellular world the random access phase is only a tiny fraction of the total data transmitted, so its inefficiency is of little relevance. In a typical IoT deployment, where all of the data can be encapsulated in the first message, virtually every transmission is a random access attempt, so efficiency drops to roughly 1/3 at best. If devices could be told when to transmit next – for example thermostats given periodic slots – then roughly 3X efficiency improvements can be made (see the Aloha throughput sketch after this list).
  • Power adjustment. In cellular systems handsets are tightly controlled by the network to use the optimal type of modulation and power levels, with these varying dynamically, often second-by-second. In typical IoT implementations transmissions are so short that there is little time for the network to adjust the device. Hence, the device will typically use higher transmit power than needed, resulting in more interference. Networks need to be designed with clever ways to adjust device power, based on knowledge such as whether the device is static (so its transmit power can be adjusted steadily over time) and on other cues from the network.
  • Multiple overlapping networks. Cellular operators have their own spectrum and can design networks free of interference from others. Conversely, most IoT networks are deployed in unlicensed spectrum where there can be interference from other IoT networks using the same technology, other IoT networks using different technology and other users. To date, this has not been a key issue but as competition grows and more networks are deployed it could become a constraining factor. Some techniques, such as code-division access (CDMA and similar) rely on orthogonality between users which is only effective where users are controlled in time and power. With a single network this is possible, but with multiple networks there is rarely coordination between them and the impact of interference can be severe. Instead, techniques such as frequency hopping and message acknowledgements are much more important as are networks that can adapt to their interference environment.
  • Flexible channel assignment. Flexible channel assignment further enhances network capacity by enabling frequency reuse in large scale deployments, while adaptive data rates permit optimal radio resource usage to maximise capacity. Time-synchronised base stations allow radio resources to be scheduled and utilised efficiently.
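The short messages point can be illustrated with some simple arithmetic. The figures below (an 8-bit payload, a 128-bit IPv6-style address and an assumed 16-bit compact address for an IoT-optimised protocol) are illustrative only.

    # Overhead ratio for a tiny IoT report; illustrative figures only.
    payload_bits = 8                 # e.g. a thermostat reading
    ipv6_address_bits = 128          # a single IPv6 address alone
    compact_address_bits = 16        # assumed compact addressing scheme

    with_ipv6 = payload_bits + ipv6_address_bits
    with_compact = payload_bits + compact_address_bits
    print(f"IPv6-style addressing: {with_ipv6} bits on air")
    print(f"compact addressing:    {with_compact} bits on air")
    print(f"ratio: ~{with_ipv6 / with_compact:.0f}x")

136 bits against 24 bits is already a factor of roughly six before any other headers are counted, which is how careful message design buys back an order of magnitude of capacity.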
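The random timing point can be made concrete with the classic Aloha throughput formulas: pure Aloha peaks at roughly 18% channel utilisation and slotted Aloha at roughly 37%, which brackets the "around 30%" figure quoted above. The sketch below simply evaluates those textbook formulas.

    import math

    def pure_aloha_throughput(offered_load):
        """Pure Aloha: S = G * exp(-2G), peaking near 18% at G = 0.5."""
        return offered_load * math.exp(-2 * offered_load)

    def slotted_aloha_throughput(offered_load):
        """Slotted Aloha: S = G * exp(-G), peaking near 37% at G = 1."""
        return offered_load * math.exp(-offered_load)

    for g in (0.25, 0.5, 1.0, 2.0):
        print(f"G={g:>4}: pure={pure_aloha_throughput(g):.2f}  "
              f"slotted={slotted_aloha_throughput(g):.2f}")

Past the peak, pushing more traffic onto the channel reduces the amount that gets through, which is the downward spiral described above and why scheduled transmissions are so valuable for IoT.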

For all of these reasons and more, the efficiency of an IoT network should not be measured in the classical manner. A network could have apparently worse modulation but simply through smaller message sizing be 10 times more efficient.

There are many technologies that are sub-optimal and have the potential to suffer severe capacity constraints. For example, UNB technologies will typically resend messages multiple times to increase the probability of successful transmission. This is clearly inefficient, and such systems have little or no ability to take action once a cell is overloaded. Wide band systems rely on orthogonality between transmissions, which can suffer badly when multiple overlapping networks are deployed in the same spectrum. 3GPP solutions are still in definition but often have large minimum packet sizes. Issues that may not become apparent during a trial, where network capacity is not stressed, may only emerge when tens of thousands of devices are deployed. At that point changing the technology is very expensive.

Narrow band modulation regimes offer a compromise between the benefits of UNB and wide band. They are optimised for uplink dominated traffic of moderate-sized data payloads and moderate duty cycles. A carefully designed narrow band IoT regime is optimised for high network capacity and supports the scale of network necessary to enable the tens of billions of predicted connections. An optimised technology will offer very short message sizes, frequency hopping, adaptable radios, group and multicast messages, minimal use of random access through flexible scheduling and much more. Although its bits/Hz may not be materially different from other solutions, in practical situations it is potentially orders of magnitude more efficient. If IoT devices were like handsets and replaced every two years, that might not matter, but with some being 10-20 year deployments, getting it right from the start is critical.

The author: Alan Woolhouse is Chair of the Weightless SIG Marketing Working Group.
The views and opinions expressed in this blog post are solely those of the author and do not necessarily reflect the opinions of IoT Business News.
