Smart Grid traffic model

For the study in our paper “What Can Wireless Cellular Technologies Do about the Upcoming Smart Metering Traffic?”, published in September 2015 in the IEEE Communications Magazine, we derived a smart grid traffic model. The model is based on the Open Smart Grid User Group specifications and on some assumptions about system deployment parameters, as described in the manuscript. Since there has been interest in the derived model, we have decided to make it publicly available on our webpage.

The file can be downloaded here: traffic_model_shared.xlsx

With this, you can generate scenarios with different ratios of residential and industrial smart meters. For example, to simulate a scenario with 80% residential and 20% industrial meters, let 80% of your traffic generators use the 40 flows in category 1 and the remaining 20% use the 40 flows in category 2. You can also scale the number of traffic generators for WAMS, which is category 3. A minimal sketch of this assignment is shown below.
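
As a rough illustration, here is a minimal Python sketch of how such an assignment could look. The flow names and the number of generators are hypothetical placeholders; the actual flow definitions come from the spreadsheet.

```python
import random

# Hypothetical sketch of the scenario mixing described above; the actual flow
# definitions come from traffic_model_shared.xlsx, and the names used here
# are placeholders.
N_GENERATORS = 1_000     # total number of smart-meter traffic generators
RESIDENTIAL_SHARE = 0.8  # 80% residential (category 1), 20% industrial (category 2)

flows = {
    1: [f"cat1_flow_{i}" for i in range(1, 41)],  # 40 residential flows
    2: [f"cat2_flow_{i}" for i in range(1, 41)],  # 40 industrial flows
}

random.seed(0)
generators = []
for g in range(N_GENERATORS):
    category = 1 if random.random() < RESIDENTIAL_SHARE else 2
    generators.append({"id": g, "category": category, "flows": flows[category]})

n_res = sum(1 for gen in generators if gen["category"] == 1)
print(f"residential generators: {n_res}, industrial: {N_GENERATORS - n_res}")
```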

Please cite our magazine paper if you use the traffic model in any of your work.

Understanding 5G through Concepts from Finance

Petar Popovski

5G for the Masses and the Ultras

The brand and the hype of 5G wireless systems bring an unprecedented level of expectation for the next-generation wireless technology. This is because 5G shows a clear ambition not only to accelerate the data speeds of the previous wireless generation, but to profoundly shake a large number of vertical industries, such as energy, transport, health, and industrial production. Here the term “vertical” comes from the representation in which the digital applications in, say, health, are built vertically on top of an underlying communication infrastructure and technology.

Two attributes are repeatedly mentioned in any popular, research, business, or standardization article about 5G: massive and ultra. There will be a massive number of devices and things connected wirelessly, making up an Internet of Things. Some of the base stations in the infrastructure will have a massive number of antennas, e.g. hundreds of them, in order to offer reliable high-speed wireless connections to users in a crowd. Moreover, a massive number of antennas will be a must for using the large chunks of spectrum at mmWave frequencies and offering huge data rates to the users.

And then there are the many ultra-s in 5G. Instead of using a massive number of antennas, one can opt for deploying numerous base stations and access points, known as ultra-dense deployment. This is the case, for example, for crowded users at a stadium (pun with “ultras” intended). 5G will also feature ultra-reliable connections, which deliver almost 100% of the packets and do so with low latency; an example of this type of connection could be a car that connects to the roadside infrastructure and offers early warnings to pedestrians and cyclists.


The three generic services

Despite the large variety of devices and connections, there is broad consensus that 5G will consist of three generic services: enhanced Mobile Broadband (eMBB), massive Machine-Type Communication (mMTC), and Ultra-Reliable Low-Latency Communication (URLLC). In a nutshell, these three services can be described as follows:

  • eMBB is focused on offering very large data rates to mobile users, to be used, for example, for live video streaming from the user side or for Virtual Reality gaming. Furthermore, eMBB should stabilize the connections of all users in the geographical region of interest (e.g. a city) and guarantee a minimum data rate (50 Mbps is often mentioned) everywhere. Here I do not mean “everywhere” as in 3G or 4G, but really everywhere and at any time.
  • mMTC is a service for a massive number of Internet of Things (IoT) devices. Each device is only sporadically active and very often has a tiny amount of data to send. The challenge here is how to deal with the sheer number of different connections, where the digital cost of managing a connection may be larger than the digital payload itself (see the illustrative sketch after this list).
  • URLLC refers to low-latency transmissions of small payloads among a relatively small set of communication nodes, used in mission-critical applications such as industrial automation or remote interaction with critical infrastructure. Here the challenge is how to guarantee very, very high reliability while meeting a short deadline.
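
To make the mMTC point concrete, here is a tiny sketch of the overhead arithmetic. The byte counts are purely hypothetical assumptions for illustration, not taken from any standard:

```python
# Hypothetical illustration: connection-management cost vs. payload for a
# tiny IoT report. All byte counts below are assumptions, not standard values.
payload_bytes = 20  # e.g., one sensor reading plus a timestamp

overhead_bytes = {
    "random access + connection setup": 60,
    "security handshake": 40,
    "protocol headers on the report itself": 40,
}

total_overhead = sum(overhead_bytes.values())
print(f"payload: {payload_bytes} B, management/header overhead: {total_overhead} B")
print(f"overhead-to-payload ratio: {total_overhead / payload_bytes:.1f}x")
# Even with these mild assumptions, the 'digital cost' of the connection is
# several times the payload, which is exactly the mMTC design challenge.
```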

To understand the technical challenge of each of the generic services, let me use a metaphor from finance. The metaphor is necessarily impaired by my very limited knowledge of the area of finance, but I believe that it illustrates the problems and tradeoffs in an accessible way.

One can compare eMBB to a mortgage for a house: a large amount of money is lent over a long period. The amount is sufficiently large to justify the administrative costs and the customization of the contract, while the long period over which the money is lent can absorb statistical fluctuations on the side of the borrower (e.g. a temporary job loss).

On the other hand, mMTC is reminiscent of microcredits: small amounts are lent to a large number of people, and the amount per loan does not justify the same administrative costs as a large loan. However, since there is only a small amount of money per borrower, the lender does not need to take very high precautions for each individual borrower: the large number of borrowers helps to even out the statistical fluctuations for the lender.

Now comes the least known part, URLLC. As the money transfer has to have a very high reliability, we can compare it to a government bond. Here the reliability comes from the large underlying mechanisms behind the government, the state, the administration, etc. And, indeed, this has been the traditional way to achieve ultra-reliable communication for e.g. military purposes: allocate an exclusive radio spectrum and use the military, as well as the administration, to monitor that nobody else is unlawfully using that spectrum. However, this is not possible for URLLC, as there will be a variety of applications and stakeholders, such that allocating exclusive spectrum to each of them is impossible.

Besides reliability, URLLC requires low latency, which would correspond to a requirement for instant liquidity of the financial instrument. Furthermore, low latency means that one does not have time on one's side (as in eMBB) to even out the statistical fluctuations. And in URLLC every packet matters, while the number of users is much lower than in mMTC, such that one cannot benefit from statistical averaging over a large population. Hence, URLLC should rely on diversity in order to protect the data packets from statistical fluctuations; in finance, this would mean that the loan is diversified and consists of bonds, stocks, gold, etc.

One more thing about the reliability and the liquidity of the money. If the required latency is very low and the reliability very high, not only should the liquidity of the financial instrument be instantaneous, but the infrastructure that offers access to it and makes it possible to get the cash when needed should also have very high reliability and low latency. This means, for example, that the corresponding database, communication network, and bank advisor should be accessible with 100% availability. From this metaphor it is clear that the technology for URLLC brings immense challenges, and the research work on this topic is starting to gain momentum.
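
A small numerical sketch shows why diversity is so powerful here. Under the idealized assumption that the links fail independently, the connection fails only when all of them fail at once:

```python
# A minimal sketch: with k independent links, the connection fails only if
# all k links fail at the same time (independence is an idealized assumption).
def combined_reliability(link_reliability: float, k: int) -> float:
    p_fail = 1.0 - link_reliability
    return 1.0 - p_fail ** k

for k in range(1, 4):
    r = combined_reliability(0.99, k)  # each link alone is 99% reliable
    print(f"{k} link(s): {r:.6f}")
# 1 link : 0.990000
# 2 links: 0.999900
# 3 links: 0.999999 -> six nines from links that are individually far from ultra-reliable
```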

Mixing everything together

Besides the individual challenges of eMBB, mMTC and URLLC, the grand challenge is how to integrate all of them on the same wireless infrastructure and spectrum. In terms of finance, this would mean having a financial institution in which the same employees and infrastructure are used to offer and manage mortgage loans, microcredits, and super-liquid, very reliable loans for special customers. Researchers and engineers have started to approach this problem by resorting to numerology. No, not numerology in the occult and mystical sense, but a scalable numerology: a set of numbers that characterize the time durations and frequency chunks used in 5G systems, chosen so as to facilitate easier multiplexing of the three generic services. Perhaps the 5G researchers can try to look for some further design inspiration in the finance sector.
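
To give a flavor of what such a scalable numerology could look like, here is a toy sketch in which the subcarrier spacing doubles, and the slot duration halves, from one configuration to the next. The base values are illustrative assumptions, not a claim about what standardization will settle on:

```python
# Sketch of a scalable numerology: each step doubles the subcarrier spacing
# and halves the slot duration (the base values are illustrative assumptions).
BASE_SCS_KHZ = 15.0  # base subcarrier spacing
BASE_SLOT_MS = 1.0   # base slot duration

for mu in range(4):
    scs_khz = BASE_SCS_KHZ * 2 ** mu
    slot_ms = BASE_SLOT_MS / 2 ** mu
    print(f"mu={mu}: subcarrier spacing {scs_khz:6.1f} kHz, slot {slot_ms:.3f} ms")
# Short slots (wide spacing) suit latency-critical URLLC traffic, while long
# symbols (narrow spacing) favor coverage-limited mMTC devices; keeping the
# values related by powers of two makes the services easier to multiplex.
```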

Shannon’s capacity of a communication channel made simple

In 2016 we are marking 100 years since the birth of Claude E. Shannon, the man who in 1948 established the area of information theory, which serves as both a methodology and an inspiration in various scientific and technological endeavors. One of his main contributions is the concept of the capacity of a communication channel. This is not the easiest concept around, and the scientific literature has witnessed many instances of misunderstanding of the operational notion of channel capacity. This year I am teaching part of an introductory course in information theory, and I have tried to make a very simple example, with minimal mathematical content and intentionally not related to electronic communication, in order to stress the (much) wider significance of Shannon’s theory.

Think of a pre-telecommunication era and of an emperor who wants to send a message to his general through messengers. Each messenger needs to ride a horse across enemy territory in order to get to the general. However, there is a chance, denoted by p, that a messenger gets captured by the enemy. For example, let p=0.1, i.e. the chance of capturing a messenger is 10%. This statistical figure means that, if the emperor sends a large number of messengers, say N=1,000,000, then most probably around 90% of them will arrive at the general, or in absolute terms, most probably around 900,000 will traverse the enemy territory successfully. Which strategy should the emperor use to increase the probability that his message gets through to the general, even though any messenger can potentially be captured by the enemy?

Unlike many of today’s politicians, the emperor of our example is smart and does not want to write the message on paper, since any captured messenger would reveal the whole message to the enemy. So the emperor uses another strategy. Suppose that each messenger can remember a sequence of, e.g., M=7 letters. The emperor wants each messenger to remember 7 letters of the message, such that the general can reconstruct the message from all the messengers that manage to cross the enemy territory. Note that, in order to reconstruct the message, the general needs to put the letters of the messengers in the correct order. Therefore, each messenger also knows his ordinal number: the first group of 7 letters is carried by messenger #1, the second group of 7 letters by messenger #2, etc. On the other hand, the enemy cannot reconstruct the message by capturing a single messenger, or even several of them – and since the message is not written on paper, a captured messenger can always lie about the 7 letters that s/he was supposed to remember. Note that here we are not dealing with the problem of secure communication, i.e. protecting the data from being acquired by the enemy (although that was also formalized mathematically by C. E. Shannon, in 1949). Instead, we are dealing with the problem of reliable communication, i.e. how to ensure that the general gets the message of the emperor with very high probability.

The trouble the emperor has is that he does not know in advance which of the messengers will be captured. However, he knows the law of large numbers: random fluctuations start to even out when one observes a large population of messengers. This is the same law that says that, if you roll a die 6,000 times, then most likely you will observe around 1,000 1s, around 1,000 2s, etc. The emperor knows that, if the chance that a messenger is captured is 10% and he sends N=10,000 messengers, then around 9,000 will arrive at the general. Since each messenger carries 7 letters, the general will receive 7*9,000=63,000 letters from which he can try to reconstruct the message. However, we repeat that the emperor does not know in advance which messengers will arrive. Hence, the general should be able to reconstruct the message if the 9,000 messengers that actually arrive are #1, #3, #6, #7, #8, #10, …, but he should also be able to reconstruct the same message if the arriving messengers are #2, #3, #4, #7, #8, … or, in fact, any combination of 9,000 out of the 10,000 messengers.
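
For readers who want to check the law of large numbers at work, here is a minimal simulation of the messenger channel (a sketch, assuming numpy is available):

```python
import numpy as np

# Each of N messengers is independently captured with probability p. The law
# of large numbers makes the number of arrivals concentrate around N * (1 - p).
rng = np.random.default_rng(0)
p, N, trials = 0.1, 10_000, 1_000

arrivals = rng.binomial(N, 1 - p, size=trials)  # messengers that get through
print(f"expected arrivals: {N * (1 - p):.0f}")
print(f"mean over {trials} trials: {arrivals.mean():.1f}")
print(f"min/max observed: {arrivals.min()} / {arrivals.max()}")
# With N = 10,000 the arrivals stay within a percent or so of 9,000, which is
# what lets the emperor plan his code around "about 9,000 will get through".
```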

We now come to the main point. Shannon proved that there exists a way for the emperor to encode the 63,000 letters of his message into a longer sequence of 70,000 letters and tell 7 of those letters to each of the 10,000 messengers, such that, if each messenger has a chance of 90% (or more) to pass through the enemy territory without being captured, then with very high probability the general can reconstruct the original message of 63,000 letters. Shannon did not provide a way to actually encode the 63,000 letters into 70,000 letters, but made the mathematically powerful statement that a code with that capability must exist.

Finally, the capacity of the channel. If 63,000 letters arrive at the general after being coded and sent through 10,000 messengers, then the information transfer is 6.3 letters per messenger. In our example this is the capacity of the channel, and it stands for the maximal amount of information transfer per single carrier of information, i.e. per messenger. If 10,000 messengers are used and the original message has 63,000 letters or fewer, then the general will reconstruct the message almost surely. On the other hand, communication above the channel capacity is unreliable: if the number of messengers is kept at 10,000, but the original message has more than 63,000 letters, then the general will almost certainly not be able to reconstruct the message.
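
The arithmetic behind these numbers is simple enough to state as a worked sketch; the formula C = M*(1-p) is just the per-messenger form of the capacity of an erasure channel:

```python
# The example's numbers: each messenger carries M letters and gets through
# with probability 1 - p, so the reliable rate is C = M * (1 - p) letters
# per messenger (the messenger channel is an erasure channel).
M, p, N = 7, 0.1, 10_000

C = M * (1 - p)             # 6.3 letters per messenger
max_message = round(N * C)  # 63,000 source letters can cross reliably
coded_letters = N * M       # 70,000 letters are actually dispatched

print(f"capacity: {C:.1f} letters per messenger")
print(f"largest reliable message with {N} messengers: {max_message} letters")
print(f"coded letters dispatched: {coded_letters}")
# Anything above 63,000 source letters with the same 10,000 messengers is
# above capacity, and reliable reconstruction is no longer possible.
```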

At first glance this is counterintuitive, as we tend to look at one messenger and think of him/her as an unreliable carrier of information. But Shannon showed that it is possible to send a message reliably by using many unreliable carriers of information, as long as the size of the message is below what is dictated by the channel capacity.

Information-theoretic purists may be horrified by the brutal lack of rigor in this example, but I hope it introduces the basic idea. And I think that the ingenious work of Shannon deserves to reach a broader audience.

Special Session on Ultra-Reliable and Mission Critical Communication

We organized a special session on Ultra-Reliable and Mission Critical Communications, together with Frank Schaich (Nokia), Berna Sayrac (Orange) and Salah Eddine Elayoubi (Orange). The special session was created within the context of the H2020 5G-PPP project FANTASTIC-5G and was held at the 2016 edition of the European Conference on Networks and Communications (EuCNC) in Athens, Greece.

We had four presentations, which covered four essential aspects of Ultra-Reliable and Mission Critical Communications. We would like to thank the presenters and the attendees for the very interesting discussions, from which we could see that this topic is extremely relevant for both industry and academia and will have an impact on 5G systems.

Petar and Nuno


The presentations were:

Presentation: “Security on a 5G setting”, by Gerhard Wunder (FU-Berlin)

Abstract: MCC requirements such as high reliability, low latency, etc. also affect security (and safety) procedures. In this talk we highlight some challenges of MCC from a 5G security perspective and discuss physical layer security (PHYSEC) as a potential remedy. To counter both passive eavesdroppers and active radio hacking systems that operate at the radio interface of wireless networks, to enable efficient, scalable key pre-distribution and authentication, and to enable much faster key establishment / authentication / attack detection procedures, PHYSEC has emerged as a promising approach, complementing classical ciphering. PHYSEC strengthens the security of wireless communications by capturing and exploiting the intrinsic randomness of the radio propagation, which avoids the use of pre-shared keys and guarantees full secrecy independently of the adversary’s computing capabilities. In this context we discuss several interesting new “fast” security procedures at the radio level, such as secret key generation “on the fly”, secrecy coding, secure pairing, etc.

Presentation: “Ultra-Reliable and Low-Latency 5G Communication”, by Osman Yilmaz (Ericsson), Manuscript

Abstract: Machine-to-machine communication, M2M, will make up a large portion of the new types of services and use cases that the fifth generation (5G) systems will address. On the one hand, 5G will connect a large number of low-cost and low-energy devices in the context of the Internet of things; on the other hand it will enable critical machine type communication use cases, such as smart factory, automotive, energy, and e-health – which require communication with very high reliability and availability, as well as very low end-to-end latency. In this paper, we will discuss the requirements, enablers and challenges to support these emerging mission-critical 5G use cases.

Presentation: “Code Design for Short Blocks: A Survey”, Gianluigi Liva (DLR), Manuscript

Abstract: “The design of block codes for short information blocks (e.g., a thousand or fewer information bits) is an open research problem which is gaining relevance thanks to emerging applications in wireless communication networks. In this work, we review some of the most recent code constructions targeting the short block regime, and we compare them with both finite-length performance bounds and classical error correction coding schemes. We will see how it is possible to effectively approach the theoretical bounds, with different performance vs. decoding complexity trade-offs.”

Presentation: “Achieving low-latency communication in future wireless networks: the 5G NORMA approach”, Alessandro Colazzo (AZCOM), Manuscript

Abstract: “The end-to-end network latency is generally considered by the 5G community to be a key requirement for future wireless networks, enabling new applications by means of end-to-end latency figures down to a few ms, a target that cannot be achieved by the current 4G technology. The 5G Novel Radio Multiservice adaptive network Architecture (5G NORMA) project aims at providing a new network architecture design able to cope with the diverse and stringent 5G KPIs, including network latency. This paper describes the low-latency issue from a network architecture perspective, starting from the 3GPP state of the art and then describing the 5G NORMA novelties.”

Workshop “Communication theory for 5G wireless systems”

Recently we hosted the workshop “Communication theory for 5G wireless systems”. The aim of the workshop was to present some novel research approaches and trends related to the challenges of 5G.
The list of speakers and their presentations can be found at this link:
http://massm2m.es.aau.dk/2016/06/01/5g_workshop/


We extend a sincere thanks to the guest speakers and all who attended the event!



Two awards received at IEEE ICC

Yesterday, at the 2016 IEEE International Conference on Communications (ICC) in Kuala Lumpur, Prof. Petar Popovski from our group received the Best Paper Award of the IEEE Communications Magazine, as a co-author of the paper “Five Disruptive Technology Directions for 5G”.

Also during the conference, Prof. Petar Popovski was officially presented with his IEEE Fellowship, in recognition of his many contributions to the research community.


Invited talk at FABULOUS conference in Ohrid

Two weeks ago I attended the FABULOUS conference in Ohrid, Macedonia, where I gave an invited talk with the title “How Suitable are Cellular Networks for Connecting Future Electricity Smart Meters?”. In this talk I presented the main findings of our paper that has just appeared in the September 2015 issue of the IEEE Communications Magazine. The conference featured several high-quality keynotes in different areas, which gave food for thought in the areas I am already familiar with and provided insights into the concepts and challenges of other areas, such as DataFlow supercomputing. Despite the slightly challenging travel to Ohrid, it was an interesting conference location due to the beautiful nature, excellent food, and the interesting old town, which is considered to be the cradle of literacy.

Keynote on Massive and Ultra-Reliable Access at IEEE ICC Workshops

I gave a keynote speech at the IEEE MASSAP Workshop at ICC 2015 in London. The talk was about wireless massive and ultra-reliable communications, which are seen as two new modes that will be featured in 5G. There is, of course, a technical part in the talk, but there is also a part that argues why research on wireless and communication theory is still vital. The slides can be found here:

Wireless Lowband Communications: Massive and Ultra-Reliable Access

The airlines need a new paradigm and it will be brought by wireless connectivity

A couple of days ago I was interviewed regarding a research project related to the fundamental communication engineering principles and algorithms for the next, 5th generation (5G) of cellular networks. So what does this have to do with airlines? In the interview I use an airplane to explain the concept of ultra-reliability. Namely, if we have an extremely reliable ground-to-plane connection, then in principle we do not need to have a pilot onboard; the flight may be controlled from the ground. And how reliable should that connection be? A simple answer would be: at least as reliable as the psychological/health condition of the pilot. Unfortunately, the tragedy of Germanwings seems to be a consequence of exactly that cause.

Now I am even more convinced that the current paradigm of airplane control should be changed to enable remote influence through extremely reliable wireless connections. Take the case of Germanwings. The pilot could not enter the cockpit, even though he typed in the security code, since another safety system was activated from inside the cockpit, namely the one by which the door is unconditionally locked in order to prevent a terrorist from forcing the person outside the door to type the code. Now let us consider the hypothesis that the cockpit lock system has ultra-reliable wireless connection(s) to the ground (or through satellites) that cannot be disabled by the pilots. Then one or more persons on the ground could join the decision process, judge the situation, and either let the pilot into the cabin or take control of the airplane (in the latter case, the wireless connection is not only to the cockpit lock, but to the whole control system). Of course, in other conditions the situation can be turned around – think of a case in which the bad guys take over the ground control center and try to interfere with the work of the pilots in order to make the plane crash. This could be addressed by a careful design of the decision rules and by having multiple diversified connections to different ground centers.

The original reason that led me to think about ultra-reliable wireless communication with airplanes was the disappearance of MH370, which I wrote about here last year. This is an example of a case in which the communication to the plane was not sufficiently diversified, leading to almost total silence from the plane (except for the few satellite pings). But think now of a futuristic airplane that is closely followed by a drone (unmanned aircraft system). The drone has wireless connections to the ground or to the satellites that are independent of those of the passenger airplane while, on the other hand, it also has close communication with the aircraft. Hence, the drone is physically independent of the aircraft, thereby significantly decreasing the probability of simultaneous physical damage. On the other hand, the drone has almost full information about the airplane and can help in decision making (e.g. upon sensor failures in the airplane) or, in the case of an accident, as with MH370, track the airplane and provide accurate information about its whereabouts.

Indeed, the future airplane does not need to have a solitary flight, as is the case today. It can have drones associated with it, which can be used to diversify the sensing and the communication. Clearly, most of the local sensor measurements at the airplane will not be correlated with the measurements of the nearby drones, but the diversified input can be quite useful in determining the situation at the macroscopic level (e.g. a rapid loss of height). Another interesting use of the drone can be to serve as an “external black box”, logging the events from the airplane collected through a wireless link.

Besides the drones, another emerging technology that has the potential to change the airline industry is represented by CubeSats, or nano-satellites. Their low price and expected large number in the future may make them an infrastructure to which the airplane is always connected, offering yet another level of diversification in the wireless connectivity of the airplane.

I think that the airline industry and technology community should start to consider these wireless-intensive solutions and significantly improve the safety of future airplanes.