Optimizing 5G at Qualcomm with Joseph Soriaga

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Getting to Know Joseph Soriaga

Before the deep learning revolution began, Joseph was studying information theory, using belief propagation to approach Shannon capacity in wireless networks. In other words, he was trying to optimize the way information was transmitted and received.

“The trick was trying to find the right graphical model for the particular problem to achieve [optimal channel capacity].”

At the time, the information theory community was just beginning to embrace experiment-driven research. Because the performance of methods like belief propagation was difficult to prove mathematically, experimental methods came into favor as a way to demonstrate Shannon capacity being approached in practice.

After finishing his PhD at UC San Diego, Joseph joined Qualcomm R&D, moving into industrial research. Over time, Joseph transitioned from wireless research toward more AI work, figuring out ways to integrate AI into new technologies like 5G. 

Shannon Capacity 

Shannon capacity characterizes how well a particular channel can convey information from a transmitter to a receiver. Essentially, it defines the theoretical upper limit on the rate at which information can be sent reliably over a given communication channel between two devices, so that practitioners know the potential of a channel and how hard they should work to optimize it.
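For a channel with additive white Gaussian noise, this limit is given by the Shannon-Hartley theorem, C = B log2(1 + SNR). A quick back-of-the-envelope calculation (the bandwidth and SNR figures below are illustrative, not from the episode):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: the maximum error-free rate (bits/s)
    of an additive-white-Gaussian-noise channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 20 MHz channel at 15 dB SNR (linear SNR = 10**(15/10) ≈ 31.6):
snr = 10 ** (15 / 10)
print(round(shannon_capacity(20e6, snr) / 1e6, 1))  # capacity in Mbit/s, ≈ 100.6
```

No real system reaches this bound exactly, which is why knowing it is useful: it tells designers how much headroom is left to chase.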

As channels become more complicated to characterize, due to factors such as noise, motion, or reflections, these limits become more difficult to evaluate. This is where machine learning comes in: ML can help communications system designers identify the best parameters for optimal channel performance.

Neural Augmentation

Within the world of wireless, there's a long history of abstracting problems so that they can be solved exactly with mathematics. In order to produce clean mathematical results to publish in their papers, academics often have to simplify the real world and ignore important factors, and those ignored factors become critical in real-world deployment.

An approach being developed at Qualcomm that Joseph is excited about is called neural augmentation. Neural augmentation involves using neural networks to tweak one of these simpler academic algorithms to fit a given real-world problem rather than trying to build a new, more complex model from scratch. In the case of communications systems this allows designers to adapt previous domain knowledge for wider-bandwidth or higher frequency channels with more nonlinearities. 
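Structurally, neural augmentation keeps the simple model and learns only a correction on top of it. A toy NumPy sketch of the idea (the linear "textbook" channel model and the tiny untrained MLP here are hypothetical stand-ins, not Qualcomm's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def simple_channel_model(x):
    """Idealized textbook model: a linear channel, standing in for the
    simplified academic algorithm."""
    return 0.9 * x

class ResidualCorrector:
    """Tiny MLP (hypothetical) that learns only the mismatch between the
    simple model and the real, nonlinear channel."""
    def __init__(self, hidden=8):
        self.w1 = rng.normal(scale=0.1, size=(1, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(scale=0.1, size=(hidden, 1))

    def __call__(self, x):
        h = np.tanh(x[:, None] @ self.w1 + self.b1)
        return (h @ self.w2).ravel()

def neural_augmented_model(x, corrector):
    # Keep the domain knowledge; let the network fix only the residual.
    return simple_channel_model(x) + corrector(x)
```

The design choice is the point: the network never has to rediscover the physics, only the gap between the physics and reality, which takes far less data than learning the whole mapping from scratch.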

The team’s recent paper suggests that deep learning can be used to map the simple models to what’s seen in the real world, and ultimately make them more predictive.

Doppler Shift for Device Tracking 

While many people remember the Doppler effect from sirens in the street or high school physics class, it's a concept used frequently in communications. Relative motion between a transmitter and a receiver shifts the frequency of the wave, and measuring that shift allows you to track the movement of a signal's source.

Typically, Doppler is used to optimize communication between a fixed base station and a moving transmitter, like a cell phone in a car.
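The size of the shift is roughly proportional to the device's speed relative to the base station. A minimal sketch using the standard narrowband approximation (the carrier frequency and speed below are illustrative):

```python
# Narrowband Doppler approximation: f_shift ≈ (v / c) * f_carrier
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(carrier_hz: float, speed_mps: float) -> float:
    """Frequency shift seen by a receiver moving directly toward
    the transmitter at the given speed."""
    return carrier_hz * speed_mps / C

# A phone in a car at 30 m/s (~108 km/h) on a 3.5 GHz carrier:
print(round(doppler_shift_hz(3.5e9, 30.0)))  # prints 350 (Hz)
```

A few hundred hertz sounds tiny next to a multi-gigahertz carrier, but it is enough to smear symbols together if the receiver doesn't compensate for it.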

How Cell Service Works

In essence, cell service is the sending and decoding of signals. When you send a communication signal, like a text from your phone, the message goes through a sophisticated remapping of bits into a constellation of symbols that is transmitted across antennas to a receiver (often a cell tower). If the receiver can pinpoint which constellation points were sent, it can decode the original bits that conveyed the message and forward them to their final destination. Because we always want to maximize communication capacity so that the greatest number of messages can be transmitted and received, receivers try to push each channel toward its ideal information rate.
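The bit-to-constellation remapping can be illustrated with Gray-coded QPSK, one of the simplest constellations used in cellular systems (a toy sketch, not the actual 5G modulation stack):

```python
import numpy as np

# Gray-coded QPSK: each pair of bits maps to one constellation point.
QPSK = {
    (0, 0): (1 + 1j) / np.sqrt(2),
    (0, 1): (-1 + 1j) / np.sqrt(2),
    (1, 1): (-1 - 1j) / np.sqrt(2),
    (1, 0): (1 - 1j) / np.sqrt(2),
}

def modulate(bits):
    """Remap a bit stream into constellation symbols for transmission."""
    pairs = zip(bits[0::2], bits[1::2])
    return np.array([QPSK[p] for p in pairs])

def demodulate(symbols):
    """Receiver side: snap each received symbol to the nearest
    constellation point and recover its bits."""
    points = list(QPSK.items())
    out = []
    for s in symbols:
        bits, _ = min(points, key=lambda kv: abs(s - kv[1]))
        out.extend(bits)
    return out

sent = [0, 1, 1, 0, 1, 1]
assert demodulate(modulate(sent)) == sent  # clean channel: perfect round trip
```

Noise, motion, and reflections push received symbols away from their ideal points; decoding succeeds as long as each symbol still lands closest to the point that was actually sent.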

Most people imagine cell coverage as a straight line between a cell tower and a mobile device. However, this isn't the case: wireless signals actually reflect off surfaces, which makes them bounce all over the place. The result is a noisy collection of signal paths between the cell tower and the device, all going in different directions.

Teams use information inferred from Doppler to statistically predict what was sent to and from the device. This helps receivers maximize bandwidth and transmit the optimal number of messages simultaneously. 

Neural Augmented Kalman Filter

One way to take advantage of channel information like Doppler to optimize channel capacity is a Kalman filter. Kalman filters take as input a series of measurements observed over time, such as noise and Doppler, and produce estimates of the channel state. However, this doesn't scale very well: each filter instance has to be individually tuned, and the system has to figure out which operating "bins" each one covers.

The Kalman filter is exactly the kind of idealized academic model that was developed for simple communications channels. The approach Joseph's team is exploring uses neural networks to augment it, training a model on time series data from previous channel estimates and having it act as a universal approximator to predict the channel's characteristics.
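To make this concrete, here is a minimal scalar Kalman filter tracking a channel gain, assuming a simple first-order (AR(1)) channel model; the parameter values are illustrative:

```python
import numpy as np

def kalman_channel_estimate(observations, a=0.95, q=0.01, r=0.1):
    """Scalar Kalman filter for a channel gain h_t that evolves as
    h_t = a*h_{t-1} + w_t (process noise variance q) and is observed
    through y_t = h_t + v_t (measurement noise variance r).
    Returns the filtered estimates."""
    h_est, p = 0.0, 1.0  # state estimate and its variance
    estimates = []
    for y in observations:
        # Predict: propagate the state and its uncertainty forward.
        h_pred = a * h_est
        p_pred = a * a * p + q
        # Update: blend the prediction with the new observation.
        k = p_pred / (p_pred + r)  # Kalman gain
        h_est = h_pred + k * (y - h_pred)
        p = (1 - k) * p_pred
        estimates.append(h_est)
    return np.array(estimates)
```

The catch is in the fixed parameters: `a`, `q`, and `r` encode one assumed channel, so a filter tuned for a pedestrian's Doppler profile performs poorly for a car on the highway.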

Essentially, an RNN takes previous channel observations and determines what kind of Kalman filter should be used, out of a continuum of parameters for the Kalman filter. They call this a “Neural Augmented Kalman Filter”, or a hypernetwork Kalman Filter. 

One big pro of this model is that it's end-to-end trainable: all you have to do is expose it to data. Performance-wise, it replaces the coarseness of binning with a smooth interpolation of Kalman filter parameters. Compared to a standalone LSTM model without the embedded Kalman filter, the hypernetwork Kalman filter handled unseen conditions much better.
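The hypernetwork idea can be sketched in a few lines: rather than selecting a Kalman filter from a fixed bin, a learned network emits the filter's parameters from recent observations. In this toy version the "network" is an untrained heuristic standing in for the RNN, and only the process noise is adapted:

```python
import numpy as np

def toy_hypernetwork(recent_obs):
    """Stand-in for the trained RNN: maps a window of past channel
    observations to a continuous Kalman parameter (here, the process
    noise q), replacing a coarse lookup over discrete bins."""
    # More variation in recent observations -> assume a faster-changing
    # channel -> larger process noise. Purely illustrative heuristic.
    return 0.001 + 0.5 * float(np.var(recent_obs))

def hypernet_kalman(observations, window=10, a=0.95, r=0.1):
    """Kalman filter whose process noise is re-emitted at every step."""
    h_est, p = 0.0, 1.0
    out = []
    for t, y in enumerate(observations):
        q = toy_hypernetwork(observations[max(0, t - window):t + 1])
        h_pred, p_pred = a * h_est, a * a * p + q
        k = p_pred / (p_pred + r)
        h_est = h_pred + k * (y - h_pred)
        p = (1 - k) * p_pred
        out.append(h_est)
    return np.array(out)
```

Because the parameter generator sits outside the filter's recursion, the filter itself stays interpretable while the learned part absorbs the messy, condition-dependent tuning.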

“We know a lot about the physical world and its behavior, and we’ve developed these models that are representative. So how do we couple statistical approaches and physics-based approaches so we don’t throw the baby out with the bath water?” 

Ultimately, Joseph doesn't think the Kalman filter is going to solve every problem, but he really espouses the philosophy behind neural augmentation: building up domain knowledge and tweaking the mismatch until it fits. Joseph emphasized the importance of giving models the right amount of flexibility to learn how to adapt, but not so much that they might learn the wrong things and adapt the solution in the wrong way.

Environmental Radio Frequency (RF) Sensing

So far, we've discussed using ML to help overcome factors like noise in communications channels, but another interesting application Joseph is working on uses that noise to infer what is happening in the communication environment. This application uses channel measurements like noise and Doppler in a Wi-Fi environment to predict the presence and motion of people in a building, or to infer gestures like waving in front of a phone.

In order to scale this technology, Joseph and the teams he works with are building unsupervised and weakly labeled models that can teach themselves to identify position from RF signals, an approach they call WiCluster.

Challenges arise from the mismatch between distances in the latent space and distances in the real world, as well as from the way elapsed time can confound the activity inference.

“A floor plan may not look like a floor plan, even though you are moving around. Sometimes you may be close in distance, but not necessarily close in time, because you may have walked in a circle.”

Joseph’s team used a combination of triplet loss and deep clustering algorithms for unsupervised learning. They also tried a weaker “zone labeling” technique, which proved quite effective. The system eventually reached accuracy within 1-2 meters, the same level supervised approaches were achieving.
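Triplet loss is simple to state: embeddings of measurements from nearby positions (anchor and positive) are pulled together, while embeddings from far-away positions (negative) are pushed at least a margin apart. A minimal sketch (the 2-D embeddings and margin value are illustrative, not from the WiCluster paper):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss on embedding vectors:
    max(0, d(anchor, positive) - d(anchor, negative) + margin).
    Zero once the negative is at least `margin` farther than the positive."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])   # embedding of an anchor RF measurement
p = np.array([0.1, 0.0])   # measurement from a nearby position
n = np.array([3.0, 0.0])   # measurement from a distant position
print(triplet_loss(a, p, n))  # prints 0.0: this triplet is already satisfied
```

The quote above explains why picking triplets is subtle here: two measurements close in time are not necessarily close in space, so naive time-based positives can mislead the embedding.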

The team is combining these models to build neural networks that can infer where you are from RF sensing; there’s a video here to see what that looks like.

Making Wireless Better

Joseph foresees AI enabling the more effective delivery of network services like 5G and more sophisticated devices. 

By allowing communications systems to better understand the environment in which they operate, machine learning allows them to become more efficient and to perform better.

“We’re really excited about seeing this joint interaction between AI and sensing help [improve] communication.”

To hear more about how AI is changing telecommunication, you can listen to the full episode here!
