High-accuracy 5G-based positioning in an industrial environment
Positioning as a service benefits a plethora of use cases, from logistics to the factory floor to vulnerable road user protection [1]. Our efforts focus on the industrial indoor environment, where vertical industries have a strong demand for precise localisation of humans and items. GPS cannot be used reliably in such an environment, while 5G systems are being deployed in more and more factories. 5G is therefore an excellent basis for a unified solution covering both communication and positioning.
Our demo setup for real-time precise positioning with 5G (Rel-16) using Transmission and Reception Points (TRPs) has been installed in the Arena2036 [2], a collaboration platform for industry, SMEs and academia aiming at the joint development of novel production flows in a fully digital factory. In this environment, industrial partners will build their own applications on top of our accurate positioning system to increase production efficiency and safety. Our solution will be a significant building block for the factory of the future and enable digital twin solutions.
In our demo video presentation, we will show an industry-standard robot equipped with a commercial 5G device moving along a programmed drive route in the Arena2036, together with its true and estimated position on a map. For the first time we will demonstrate the positioning performance of a commercially available 5G system, present a statistical evaluation, and discuss the seamless integration of positioning with communication services. With the current implementation, we achieve a position error below 50 cm for more than 90% of all measurements using standard-compliant techniques. Further gains are expected from algorithmic improvements currently being implemented.
[1] S. Saur, M. Mizmizi, J. Otterbach, T. Schlitter, R. Fuchs and S. Mandelli, “5GCAR Demonstration: Vulnerable Road User Protection through Positioning with Synchronized Antenna Signal Processing,” WSA 2020; 24th International ITG Workshop on Smart Antennas, 2020, pp. 1-5.
[2] https://arena2036.de/en/
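The statistical evaluation above is summarized by the fraction of position fixes whose error stays below a bound (here, 50 cm for more than 90% of measurements). As an illustration only (this is not the demo's actual evaluation pipeline), a minimal sketch of such an evaluation over logged true/estimated 2D positions:

```python
import numpy as np

def error_percentile(true_xy, est_xy, q=90):
    """Return the q-th percentile of the 2D position error in metres."""
    errors = np.linalg.norm(np.asarray(est_xy) - np.asarray(true_xy), axis=1)
    return np.percentile(errors, q)

def fraction_within(true_xy, est_xy, bound_m=0.5):
    """Fraction of position estimates with error below bound_m metres."""
    errors = np.linalg.norm(np.asarray(est_xy) - np.asarray(true_xy), axis=1)
    return np.mean(errors < bound_m)
```

With these helpers, the reported result corresponds to `fraction_within(true, est, 0.5) > 0.9`, or equivalently `error_percentile(true, est, 90) < 0.5`.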
Demonstration of a D-Band Radio-on-Glass Module for Backhaul Applications
The demo will showcase Nokia Bell Labs D-Band (110-170 GHz) Radio-on-Glass modules as well as Phased Array-on-Glass devices. These modules represent a world first in integration and performance, and serve as the foundation for next-generation point-to-point backhaul communication systems as well as emerging 6G systems.
5G Plug & Produce
Replacing wired Ethernet with 5G-based wireless connections provides significant benefits in Industry 4.0 scenarios. Seamless integration with operational technology (OT) infrastructure, including industrial network management systems (NMS), is the fundamental requirement. The demo shows how the 3GPP-standardized 5G Ethernet enhancements (“plug and produce”) enable simple integration of 5G into industrial OT, and how extended Ethernet features such as prioritization can be realized in mixed end-to-end scenarios combining Layer 2 Ethernet segments with 5G networks in between.
Whitepaper: https://onestore.nokia.com/asset/207281/
Vision-Based Positioning for Digital Twin Creation
Digital Twins are used for various application scenarios, from network planning and optimization, through maintenance of machines and buildings, to supporting XR services. In this demo, we present how to create a digital twin using Vision-Based Positioning (VBP). With VBP, detailed maps of the environment can be generated, and VBP can also deliver very precise positioning. The downside of VBP is that it is demanding in terms of compute resources. We therefore demonstrate a so-called split-the-chip approach, which allows the VBP processing to be split dynamically between the device and an edge cloud connected via a 5G SA network.
NYURay: a 3D mmWave and sub-THz ray tracer
This demonstration shall introduce NYURay, a 3D mmWave and sub-THz ray tracer calibrated to real-world indoor and outdoor measurements at 28, 73, and 142 GHz.
NYURay predicts the temporal and angular information of multipath components arriving at mmWave/sub-THz receivers by providing users with the power delay profile and angular spectrum at multiple locations in a user-specified environment. The ray tracer may be used for mmWave/sub-THz coverage prediction and simulations.
Millimeter-wave and sub-Terahertz channel simulation
NYUSIM is an open-source mmWave and sub-Terahertz channel simulator. The latest version, NYUSIM 3.0, enables channel simulations for the indoor office scenario at carrier frequencies up to 150 GHz. The simulator produces accurate omnidirectional and directional channel impulse responses, power delay profiles, and 3-dimensional (3-D) power angular spectra, which can be used in beamforming algorithms and capacity evaluation for 6G and beyond. This demonstration will show how to specify simulation parameters and run NYUSIM to create various channel instances.
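To illustrate the kind of output NYUSIM reports (this sketch is not NYUSIM code, which is a standalone simulator): given a set of multipath taps, a power delay profile plots tap power against excess delay, and the RMS delay spread is its standard summary statistic.

```python
import numpy as np

def power_delay_profile(delays_ns, gains):
    """Power delay profile: tap power in dB versus excess delay.

    delays_ns: multipath tap delays in nanoseconds
    gains: complex tap amplitudes of the channel impulse response
    """
    powers_db = 20 * np.log10(np.abs(gains))
    order = np.argsort(delays_ns)
    return np.asarray(delays_ns)[order], powers_db[order]

def rms_delay_spread(delays_ns, gains):
    """Power-weighted RMS delay spread in nanoseconds."""
    p = np.abs(gains) ** 2
    mean_tau = np.sum(p * delays_ns) / np.sum(p)
    return np.sqrt(np.sum(p * (np.asarray(delays_ns) - mean_tau) ** 2) / np.sum(p))
```

For example, two equal-power taps at 0 ns and 100 ns yield an RMS delay spread of 50 ns.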
Next Generation Channel Sounder System for sub-THz Frequencies
The demonstration shall introduce the current 142 GHz channel sounder used for both indoor and outdoor propagation measurements, and demonstrate the system operations in measuring the multipath channel in the lab environment. Next, the miniaturized baseband evaluation board for a sliding-correlation-based channel sounder built in 65 nm CMOS shall be showcased. The performance of the board in resolving multipath components with a 1 ns delay shall be demonstrated while highlighting the future direction towards a fully miniaturized channel sounder system. The recommissioned RF probe station, capable of characterizing devices in the 140-220 GHz frequency range, shall be presented as an important tool for developing phased arrays at sub-THz frequencies.
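The principle behind a correlation-based sounder is that cross-correlating the received waveform with the known transmitted pseudo-noise (PN) sequence produces a peak at each path delay. A toy baseband sketch of this idea (all parameters assumed for illustration; this is not the actual 142 GHz hardware chain):

```python
import numpy as np

rng = np.random.default_rng(0)

# Known pseudo-noise (PN) probing sequence of +/-1 chips (length assumed).
pn = rng.choice([-1.0, 1.0], size=511)

# Toy two-path channel: a direct path plus an echo delayed by 5 chips.
# At a 1 GHz chip rate one chip is 1 ns, so the echo models a 5 ns path.
delay_chips, echo_gain = 5, 0.4
rx = np.concatenate([pn, np.zeros(delay_chips)])
rx[delay_chips:] += echo_gain * pn

# Cross-correlation with the PN sequence peaks at each path delay,
# yielding an estimate of the channel's power delay profile.
corr = np.abs(np.correlate(rx, pn, mode="full"))
lags = np.arange(len(corr)) - (len(pn) - 1)
peak_lag = lags[np.argmax(corr)]  # direct path at lag 0
```

The strongest correlation peak falls at lag 0 (the direct path), with a secondary peak 5 chips later revealing the echo; resolving a 1 ns delay simply requires a chip rate of 1 GHz or more.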
Machines in Motion Laboratory Tour: Model-predictive control for reactive and robust behaviors
This demonstration provides an overview of the Machines in Motion Laboratory at NYU. The laboratory investigates the algorithmic foundations of autonomous robotic movements. Using optimal control and reinforcement learning, the laboratory designs general algorithms to produce complex and versatile behaviors such as walking, jumping, grasping or object manipulation in unknown environments. This demonstration will present the laboratory's unique robotic experimental infrastructure and our most recent results toward more autonomous and agile robots. We will also discuss our work at NYU Wireless that leverages next-generation wireless to increase robot autonomy.
Next generation brain-machine interfaces
At the NYU nanolab we are working on the development of neural probes for next-generation brain-machine interfaces. These probes may help us advance our understanding of brain function and of underlying neurological disorders, such as Parkinson's disease, schizophrenia, eating disorders and many more. This project has two main components: first, we are developing nano-engineered graphitic materials for building scalable and highly sensitive sensors; second, we are developing a low-power, portable CMOS detection circuit with high temporal resolution for interfacing with the nano-engineered graphitic sensors.
Wearables for the Visually Impaired Using Mobile Edge Computing
Immobility is a fundamental challenge for persons with visual impairment (VI), whose limited spatial awareness leads to inefficiencies and peril during navigation. In this demo, we present VIS4ION, a Visually Impaired Smart Service System for Spatial Intelligence and Navigation. VIS4ION is a human-in-the-loop, sensing-to-feedback advanced wearable system that supports a host of microservices during VI navigation, both outdoors and indoors. The VIS4ION platform is an instrumented backpack: a series of miniaturized sensors is integrated into the support straps and connected to an embedded system for computational analysis; real-time feedback is provided through a binaural bone-conduction headset and, optionally, a waist strap reconfigured as a haptic interface.
We discuss our research to make the VIS4ION system wirelessly cloud-connected, offloading computational processing via high-bandwidth 5G links. Specifically, we are experimenting with uploading high-resolution multi-camera data to the cloud, which then runs advanced deep-learning-based machine vision algorithms for scene processing and navigation.
140 GHz, 8-channel, Fully-Digital, Software Defined Radios
This demo shows how the 140 GHz ASICs and modules developed at UC Santa Barbara were used to build a multi-channel 140 GHz software-defined radio: a) integration with the Xilinx RFSoC-based FPGA; b) porting of the calibration code; and c) beamforming and data-link demonstrations.
Medical Robotics and Interactive Intelligent Technologies (MERIIT @ NYU) Lab
In this video, we provide an overview of activities at the MERIIT @ NYU Lab led by Prof. S. Farokh Atashzar. We will explain our state-of-the-art telerobotic and telemedicine research with a specific focus on health and medicine. For more information, please visit https://engineering.nyu.edu/news/telerobotic-surgery-comes-nyu-tandon.
NYU Tandon nanofabrication facility tour and capabilities
The demo provides background information and a virtual tour of the Nanofabrication Cleanroom Facility at NYU Tandon, which comprises over 2,000 sq. ft. of class-100 and class-1,000 cleanroom space and a host of advanced micro/nano fabrication tools spanning lithography, etch, deposition and metrology. The cleanroom is a multi-user facility, open to students, scientists and engineers from all institutions and companies.
Advanced high-fidelity channel modelling and methodology
The demonstration will show advanced high-fidelity channel modelling, including state-of-the-art visualization technology. The framework combines propagation and ray-tracing tool-sets with a 3D visualization framework. The demonstration also includes a discussion of how such a framework enables real-time experiments and serves as a stepping stone towards a digital twin of cellular systems.
5G and Edge for autonomous zero-defect manufacturing
The demonstration, presented jointly by InterDigital, Vodafone and Amazon (AWS), delivers a near-real-time Industry 4.0 zero-defect manufacturing solution based on the integration of AWS Wavelength Edge into Vodafone’s 5G network. It shows the feasibility of 5G and Edge solutions to support the challenging low-latency requirements of the targeted Industry 4.0 use cases, which cannot be met with existing cloud solutions.
Specifically, the demonstration showcases the detection and disposal of defective products on an assembly line and the remote navigation of a vehicle within a factory. These applications are hosted at the AWS edge due to their low-latency requirements and use the Vodafone 5G network to connect to the various terminal devices, such as a robot, camera, or LiDAR. The solution provides the required high-bandwidth connectivity in the uplink and low-latency connectivity in both uplink and downlink.
Addressing Sub-Terahertz OTA Channel Impairments
Sub-Terahertz (sub-THz) frequencies (100-300 GHz) with extreme modulation bandwidths are part of early 6G research. Keysight’s new sub-THz testbed can perform EVM measurements for extreme modulation bandwidths at D-Band (110-170 GHz) and G-Band (140-220 GHz). Understanding and addressing channel impairments at sub-THz frequencies is also an area of research for 6G. This demo shows Keysight’s 6G testbed receiver being customized for a real-time adaptive equalizer implementation to address sub-THz channel impairments. Adaptive equalization of the sub-THz over-the-air (OTA) channel response will be demonstrated at D-Band.
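Adaptive equalization of this kind is typically built on a stochastic-gradient update such as least mean squares (LMS). As a hedged illustration of the underlying principle only (a generic textbook algorithm, not Keysight's receiver implementation; all names and parameters are assumptions), a minimal LMS-trained FIR equalizer:

```python
import numpy as np

def lms_equalize(rx, training, n_taps=11, mu=0.05):
    """Adapt FIR equalizer taps via LMS against a known training
    sequence; return the final taps and the equalized output."""
    rx = np.asarray(rx, dtype=complex)
    w = np.zeros(n_taps, dtype=complex)
    out = np.zeros(len(rx), dtype=complex)
    for n in range(len(rx)):
        # Tap-delay line: most recent sample first, zero-padded at start.
        x = rx[max(0, n - n_taps + 1):n + 1][::-1]
        x = np.pad(x, (0, n_taps - len(x)))
        y = np.dot(w, x)
        out[n] = y
        if n < len(training):
            e = training[n] - y        # error against the known symbol
            w += mu * e * np.conj(x)   # stochastic-gradient (LMS) update
    return w, out
```

Run against QPSK symbols passed through a toy two-tap channel, the equalizer drives the residual error toward zero; a real sub-THz receiver must additionally contend with phase noise and hardware nonlinearities.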
RF Fingerprinting of LoRaWAN Transmissions
There is an ever-increasing use of low-power wireless sensor technology, for example LoRaWAN [1], for sensing and monitoring applications, which creates a need to make this technology robust against cyber attacks. This is one of the use cases being considered by the UKRI/EPSRC Prosperity Partnership in Secure Wireless Agile Networks (SWAN) [2]. SWAN is addressing radio frequency (RF) based cyber-attack detection and mitigation, rather than network-oriented intrusion detection.
Within SWAN we have developed an RF penetration test-bed (pen-test) to facilitate both the injection of jamming waveforms and the extraction of live over-the-air (OTA) waveforms for RF fingerprinting, with LoRa as a first candidate technology. In the virtual demo, we will show a two-pronged methodology for RF fingerprinting of the start-up chirps from a LoRa modem. Firstly, a self-organising feature map (SOFM) is trained using unsupervised competitive learning of neural network (NN) clusters to produce two-dimensional (2D) discretised representations of the input space. The input space consists of the differential constellation trace of LoRa I/Q samples. These I/Q samples are extracted from LoRa RF transmissions from different LoRa transceiver modules (RN2483 from Microchip), as well as from a baseband LoRa waveform that is generated in MATLAB, then up-converted and transmitted using a vector signal generator (VSG). Secondly, an optimised deep convolutional neural network (CNN) architecture with batch normalisation at every convolutional layer is proposed. The CNN is trained using labelled datasets compiled from the 2D SOFMs from multiple training epochs of the original NN clusters. The proposed architecture is expected to be invariant to medium access control (MAC) ID spoofing; in other words, it learns only physical (PHY) layer features from the differential constellation trace of LoRa I/Q samples, without learning MAC features. Our current work demonstrates 100% accuracy. Further research challenges for increasing the robustness of this approach, such as adding additive white Gaussian noise (AWGN) and channel impairments to emulate OTA transmission, are discussed.
[1]: https://lora-alliance.org/
[2]: https://www.swan-partnership.ac.uk/
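The differential constellation trace used as the SOFM input space can be sketched as follows (an illustrative definition, assuming the common formulation in which each I/Q sample is multiplied by the conjugate of a lagged sample; function names and the histogram step are hypothetical, not SWAN's code):

```python
import numpy as np

def differential_trace(iq, lag=1):
    """Differential constellation trace: each I/Q sample times the
    conjugate of a lagged sample. This cancels carrier frequency offset
    and emphasizes transmitter-specific PHY-layer distortions."""
    iq = np.asarray(iq, dtype=complex)
    return iq[:-lag] * np.conj(iq[lag:])

def trace_histogram(trace, bins=64, lim=2.0):
    """2D histogram of the trace: a discretised representation of the
    kind that could be fed to a self-organising feature map."""
    h, _, _ = np.histogram2d(trace.real, trace.imag,
                             bins=bins, range=[[-lim, lim], [-lim, lim]])
    return h / h.sum()
```

For a pure tone (constant frequency offset), the trace collapses to a single constellation point, which is why the representation depends on hardware impairments rather than on the modulated payload or MAC-layer identifiers.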
USRP X410 FPGA Accelerated Real Time Spectrum Analyzer
NI’s new USRP X410 has more bandwidth, more channels, and more FPGA processing power than ever before. This demo will show four channels streaming through the FPGA, which processes the data and displays a waterfall chart. The fosphor real-time spectrum analyzer (RTSA) application is open source and leverages the RFNoC framework for easy integration and replication of the demonstration. We are excited to see what the community can do with a powerful piece of hardware like the X410!