SmartSantander City – EAR-IT project

Smart City of the Week: Santander

In the run-up to the Smart City Event, we highlight an interesting Smart City project every week.
This week: The City of Santander.

Foreword

Here we introduce the SmartSantander project, an EU FP7 project (15 partners, €6M, Sept 2010 – Sept 2013, www.smartsantander.eu), and its add-on FP7 project EAR-IT (7 partners, €1.8M, Sept 2012 – Sept 2014, www.ear-it.eu) on the use of “sounds”, which is still active. These two projects, together with earlier ones, involved the city of Santander in Spain and gave birth to the SmartSantander city, which continues to provide and develop innovative and interactive services for its citizens and for its local economic stakeholders.

Introduction

The SmartSantander project [1] aims at the creation of an experimental test facility for research and experimentation on architectures, key enabling technologies, services and applications for the Internet of Things in the context of a city (Santander, in the north of Spain). The envisioned facility is conceived as an essential instrument for achieving European leadership in key enabling technologies for IoT, and for providing the European research community with a unique platform of its kind, suitable for large-scale experimentation and evaluation of IoT concepts under real-life conditions.

The project provides a twofold exploitation opportunity. On the one hand, the research community benefits from a unique infrastructure that allows true field experiments: researchers can reserve the required resources within the network for a given time period in order to run their experiments. On the other hand, different services fitting citizens’ requirements are deployed. Unlike the experimental applications, it is the authorities or the service manager who determine the cluster of nodes running each service, as well as the service’s duration.

To fulfil all the project requirements, the SmartSantander architecture relies on existing components from other platforms, complemented with additional building blocks addressing the specific singularities of the project. These platforms include the FP7 Integrated Project SENSEI [2], the FP7 STREP WISEBED [3] and the Telco 2.0 Open Platform [4]. Following SmartSantander, new projects are being carried out on its platform to create new services and increase the liveability of Santander.
Among these projects, EAR-IT [5] (Experimenting Acoustics in Real environments using Innovative Test-beds) focuses on the use of acoustic sensing to create new, outstanding services. The project aims to reveal the untapped value of audio data, which opens the opportunity to explore new solutions based on acoustic sensor networks supporting a myriad of applications of high social and business value. To mention one example, the project consortium has made significant progress in estimating traffic density by analysing road sound and noise in a real environment.

SmartSantander Architecture: Service/Experimentation duality

As previously mentioned, the main objective of the SmartSantander project is the provision of a framework supporting both service provision and experimentation over a novel architecture [6][7][8] based on a three-tiered network approach: the IoT node tier, the gateway (GW) tier and the testbed server tier. The IoT node tier embraces the majority of the devices deployed in the testbed infrastructure. These devices are typically resource-constrained and host a range of sensors and, in some cases, actuators. Other devices, such as mobile phones and devices with reasonable computing power and communication capabilities (e.g. mobile devices in vehicles), behave as IoT nodes in terms of sensing capabilities and as GW nodes in terms of processing and communication capabilities. The GW tier links the IoT devices at the edges of the capillary network to the core network infrastructure; IoT nodes are grouped in clusters under a GW, which locally gathers and processes the information retrieved by the IoT devices. GW-tier devices are typically more powerful than IoT nodes in terms of memory and processing capabilities, and also provide faster and more robust communication interfaces. The server tier provides more powerful computing platforms with high availability, directly connected to the core network. These servers host IoT data repositories and application servers, receiving data from all GW-tier nodes.
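The three-tier data flow described above can be sketched in a few lines. This is purely illustrative: the class and field names are our own assumptions, not SmartSantander code, and the real tiers run on constrained hardware rather than Python objects.

```python
# Illustrative sketch of the three-tier flow: IoT nodes produce readings,
# a gateway buffers them for its cluster, the server tier stores them.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Reading:
    node_id: str
    sensor: str
    value: float

@dataclass
class Gateway:
    """GW tier: locally gathers readings from its cluster of IoT nodes."""
    gw_id: str
    buffer: List[Reading] = field(default_factory=list)

    def collect(self, reading: Reading) -> None:
        self.buffer.append(reading)

    def flush(self) -> List[Reading]:
        """Forward buffered readings towards the server tier, clearing the buffer."""
        batch, self.buffer = self.buffer, []
        return batch

class TestbedServer:
    """Server tier: hosts the IoT data repository fed by all gateways."""
    def __init__(self) -> None:
        self.repository: Dict[str, List[Reading]] = {}

    def ingest(self, batch: List[Reading]) -> None:
        for r in batch:
            self.repository.setdefault(r.sensor, []).append(r)

gw = Gateway("gw-01")
gw.collect(Reading("node-17", "temperature", 21.5))
gw.collect(Reading("node-18", "noise", 63.0))
server = TestbedServer()
server.ingest(gw.flush())
```

The key design point the sketch mirrors is that IoT nodes never talk to the server directly: the gateway is the aggregation and processing point for its cluster.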

[one_half]

The three-tiered network approach described above is based on the architecture shown in figure 1. From the user perspective, three main blocks can be identified: service provision, service experimentation and experimentation at node level. Service provision covers the use cases developed within the SmartSantander project, which take information from the IoT infrastructure and process it to offer the corresponding services. Service experimentation refers to the different experiments/services that can be implemented by external users,

[/one_half][one_half_last]


Figure 1: SmartSantander logical architecture and building blocks

[/one_half_last]

utilizing the information provided by the deployed IoT infrastructure. Experimentation at node level [9] implies node reservation, scheduling, management and flashing [10] in order to execute different experiments over a group of nodes, e.g. routing protocols, network coding or data mining. The Service Provision GW receives the data retrieved by the deployed devices and stores them in the USN platform. The Node Manager is also fed with this information in order to monitor the available resources, reporting to and keeping the Resource Manager updated accordingly. The Service-Level Experiment Manager (SLEM) allows service-level experimenters (i.e. those running experiments using data provided by deployed nodes) to access data collected from the services, stored in the USN component. Service providers (i.e. those providing a service with data retrieved by the deployed nodes) access the data generated by nodes within the network directly through the USN. The Portal Server is the access point to the SmartSantander facility for node-level experimenters, through the SmartSantander Testbed Runtime module, which provides access to the platform (SNAA), reserves (RS) the set of nodes that will run the experiment and acts (iWSN) on them, both remotely flashing them with the corresponding code image [5] and receiving the data associated with the experiments carried out on them. Finally, the GW4EXP allows access to the nodes for both network management and experimentation at node level.
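The node-level experimentation workflow (authenticate, reserve a set of nodes, flash them with a code image) can be summarised in a small sketch. Every name here is hypothetical; the real Testbed Runtime exposes these roles through its SNAA, RS and iWSN services rather than through a Python class.

```python
# Hypothetical sketch of the node-level experimentation workflow.
# The three methods mirror the SNAA (access), RS (reservation) and
# iWSN (flashing/acting) roles described in the text.
from typing import Dict, List

class ExperimentSession:
    def __init__(self, user: str) -> None:
        self.user = user
        self.reserved: List[str] = []
        self.flashed: Dict[str, str] = {}

    def authenticate(self) -> bool:
        """SNAA role: check access to the platform (trivially granted here)."""
        return bool(self.user)

    def reserve(self, node_ids: List[str]) -> None:
        """RS role: reserve the set of nodes that will run the experiment."""
        self.reserved.extend(node_ids)

    def flash(self, image: str) -> None:
        """iWSN role: remotely flash every reserved node with a code image."""
        for node in self.reserved:
            self.flashed[node] = image

session = ExperimentSession("researcher-01")
assert session.authenticate()
session.reserve(["node-101", "node-102"])
session.flash("routing-experiment-v2.bin")
```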

Santander deployment: Use cases

[one_half]

The SmartSantander testbed [11] is currently composed of around 3000 IEEE 802.15.4 devices, 200 devices with GPS/GPRS capabilities and more than 2000 joint RFID tag/QR code labels, deployed both at static locations (streetlamps, facades, bus stops) and on board public vehicles (buses, taxis). The deployment, shown in figure 2, is associated with the development of different use cases:

[/one_half][one_half_last]


Figure 2: Use-cases deployment

[/one_half_last]

  • Static Environmental Monitoring: Around 2000 IoT devices installed at streetlamps and facades, mainly in the city centre. They comprise different sensors offering measurements of various environmental parameters, such as temperature, CO, noise and luminosity. Nodes containing IoT devices also include two independent IEEE 802.15.4 modules: one runs the Digimesh protocol (a proprietary routing protocol) and is intended for service provision (environmental measurements) and network-management data transmission, whilst the other (implementing a native 802.15.4 interface) is associated with data retrieved for experimentation.
  • Mobile Environmental Monitoring: To extend the static environmental monitoring use case beyond measurements at fixed points, 150 devices mounted on public vehicles (buses, taxis and park and garden maintenance vehicles) retrieve environmental parameters for different parts of the city. The modules installed in the vehicles comprise a local processing unit in charge of sending (through a GPRS interface) the geolocated values retrieved by both the sensor board and the CAN-Bus module. The sensor board measures different environmental parameters, such as CO, NO2, O3, particulate matter, temperature and humidity, whilst the CAN-Bus module takes the main parameters associated with the vehicle, retrieved from the CAN bus, such as position, altitude, speed, course and odometer. Furthermore, two additional interfaces, using the IEEE 802.15.4 and IEEE 802.11b standards, have been included in a subset of the deployed vehicles. The 802.15.4 interface supports experimentation through interaction with the aforementioned static devices, so-called vehicle-to-infrastructure (V2I) communication. The IEEE 802.11b interface is intended for experimenting with opportunistic communications, increasing the measurement frequency by downloading the data when the mobile node approaches an IEEE 802.11b hot-spot (e.g. the bus depot).
  • Parks and gardens irrigation: Around 50 devices have been deployed in three different green zones of the city to monitor irrigation-related parameters, such as soil moisture and temperature, rainfall (pluviometer), wind (anemometer), solar radiation, atmospheric pressure and humidity, in order to make irrigation as efficient as possible. In terms of processing and communication, these nodes are identical to those deployed for static environmental monitoring, implementing two independent IEEE 802.15.4 communication interfaces.
  • Outdoor parking area management: Almost 400 parking bays in the city centre are being monitored using ferromagnetic sensors buried under the asphalt, so as to detect parking availability in different areas. The communication between the sensors and the upper layers of SmartSantander uses two different frequency bands, 868 MHz and 2.4 GHz, through multi-hop protocols.
  • Guidance to free parking lots: Using the information retrieved by the deployed parking sensors, 10 panels have been installed at the main street intersections in order to guide drivers towards available parking lots.
  • Traffic Intensity Monitoring: Around 60 devices have been deployed at the main entrances to the city of Santander to measure the main traffic parameters, such as traffic volume, road occupancy and vehicle speed.
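The chain from parking sensors to guidance panels is simple to illustrate: each buried sensor reports whether its bay is occupied, and the panels show the free-bay count per zone. This is a sketch with invented zone names and data shapes, not the project's actual processing.

```python
# Illustrative sketch: deriving the free-bay counts shown on the
# guidance panels from per-bay occupancy flags (True = occupied).
from typing import Dict, List

def free_bays_by_zone(sensor_states: Dict[str, List[bool]]) -> Dict[str, int]:
    """Map each parking zone to its number of free bays."""
    return {zone: sum(1 for occupied in bays if not occupied)
            for zone, bays in sensor_states.items()}

panels = free_bays_by_zone({
    "city-centre-north": [True, False, False, True],
    "city-centre-south": [False, False, True],
})
# panels == {"city-centre-north": 2, "city-centre-south": 2}
```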

As can be seen from the described use cases, all of them are intended to provide a different service, as well as to offer the retrieved data to other users, the so-called experimentation at service level. In addition, static and mobile environmental monitoring and parks and gardens irrigation also offer the possibility of carrying out experimentation at node level through an additional communication interface. Apart from the aforementioned use cases, two citizen-oriented services have been deployed, including the corresponding applications for the Android and iOS operating systems, in order to foster citizens’ involvement.

[one_half]

  • Augmented Reality: As shown on the left side of figure 3, this service includes information about more than 2700 places in the city of Santander, classified in different categories: beaches, parks and gardens, monuments, shops. To complement and enrich this service, 2500 RFID tag/QR code labels have been deployed, offering the possibility of “tagging” points of interest (POI) in the city, such as touristic POIs, shops and public places (parks, squares). On a small scale, the service provides the

[/one_half][one_half_last]


Figure 3: Augmented reality application

[/one_half_last]

opportunity to distribute information in the urban environment as location-based information.

[one_half]

  • Participatory Sensing: As shown on the right side of figure 4, in this scenario users utilize their mobile phones to send physical sensing information anonymously to the SmartSantander platform, e.g. GPS coordinates, compass heading and environmental data such as noise and temperature. Users can also subscribe to services such as “the pace of the city”, where they receive alerts for specific types of events currently occurring in the city. Users can themselves report the occurrence of such events, which are subsequently propagated to other users subscribed to the corresponding event types.

[/one_half]

[one_half_last]

 
Figure 4: Participatory sensing application.

[/one_half_last]

EAR-IT project: the sounds for smart environment

The deployment of wireless sensor networks in urban environments offers new possibilities for innovative applications. Currently, cameras, seismic and ultrasonic detectors, inductive loops, etc. are in use; unfortunately, the benefits of acoustic sensors remain widely uninvestigated. Within the EU FP7 project EAR-IT (ID 318381), the potential of the acoustic modality for monitoring applications and other innovative applications (e.g. emergency detection) is studied. By making use of the wireless acoustic sensors (low-cost, constrained) already available in the test bed, in combination with new intelligent, powerful sensors – the Acoustic Processing Units (APUs) – noise-type classification, its quantification and its assessment even on subjective levels become possible. The following sections present the latest achievements in the EAR-IT use cases “Acoustic Traffic Density Monitoring” and “Emergency Detection” implemented in the SmartSantander test bed, and the recent developments towards an efficient, robust and scalable system. In particular, deterministic acoustic algorithms as well as machine-learning-based solutions are evaluated by means of already deployed monitoring systems.

Acoustic Sensing Technologies for Wireless Sensor Networks

Wireless sensors of various kinds are already deployed, e.g. for measuring light, CO2, humidity, etc.; however, the potential benefit of the audio modality remains widely uninvestigated, although it comes with obvious advantages. Audio sensors are cheap, energy-efficient and often easy to deploy, do not depend on a line of sight (NLOS), allow omnidirectional sensing and are essentially independent of weather conditions and lighting situations. The advantage of NLOS acoustic sensing over video cameras is its quasi-independence from the sensor position, its ability to “see” through obstacles and its freedom from a limited viewing angle. This is important: many sensors may need to be added to the environment to achieve full coverage of an area of interest, whereas one could use far fewer sensors and less complex sensing solutions by incorporating the audio modality more deeply. Acoustic sensors (i.e. microphones) are furthermore multipurpose by definition. They not only capture relevant environmental information (through the sound) and provide physical measures, e.g. loudness or direction of sound, but also allow the identification of specific events within the audio stream if equipped with a reasonable amount of processing power. Hence, once deployed, their intelligent sensing capability at a modular software level, together with their communication capabilities, makes them very interesting devices within the IoT context.

Deploying APU (Acoustic Processing Unit)

[one_half]

The goal of deploying an advanced audio sensor that is cheap, robust, easy to install and compatible with the test bed, while at the same time meeting the computational requirements of complex audio signal processing algorithms, led to the development within EAR-IT of a new generation of IoT device: the so-called Acoustic Processing Unit (APU). Compared with existing solutions, the APU comes with increased processing power via the utilization of an embedded

[/one_half][one_half_last]


Figure 5:  APU and deployment in the city

[/one_half_last]

processing platform capable of running complex algorithms on high-quality audio. The APU is equipped with a modular software framework for acoustic event detection based on deterministic and machine learning algorithms, consisting of the following major components:

  1. A pre-processing stage to obtain low-level information about the input signal and to derive suitable signal representations;
  2. An acoustic event detection stage to derive mid-level contextual information about the audio data;
  3. A statistical modelling stage to formulate short- and long-time high-level semantics for application and service development.
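The three stages above can be sketched end to end with a deliberately simple choice of methods: RMS energy as the low-level representation, a fixed threshold as the event detector, and an event rate as the long-time statistic. The feature, threshold and frame size are our own illustrative assumptions, not the APU's actual algorithms.

```python
# Minimal three-stage sketch of the APU pipeline (illustrative methods only).
import math
from typing import List

def preprocess(samples: List[float], frame_size: int) -> List[float]:
    """Stage 1: split the signal into frames and compute per-frame RMS energy."""
    frames = [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]
    return [math.sqrt(sum(s * s for s in f) / len(f)) for f in frames if f]

def detect_events(energies: List[float], threshold: float) -> List[int]:
    """Stage 2: flag frames whose energy exceeds the threshold as acoustic events."""
    return [i for i, e in enumerate(energies) if e > threshold]

def summarize(events: List[int], n_frames: int) -> float:
    """Stage 3: a long-time statistic, here simply the fraction of event frames."""
    return len(events) / n_frames if n_frames else 0.0

# A quiet signal with one loud burst in the middle.
signal = [0.01] * 8 + [0.9, -0.8, 0.85, -0.9] + [0.02] * 8
energies = preprocess(signal, frame_size=4)
events = detect_events(energies, threshold=0.5)   # only the burst frame fires
rate = summarize(events, len(energies))           # fraction of frames with events
```

Note that no audio content leaves the pipeline: only frame indices and statistics survive, matching the privacy property described below.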

[one_half]


Figure 6: Two families of applications using sound, combining APUs and simple IoT nodes

[/one_half][one_half_last]

The system is highly adaptable thanks to the modular structure of each stage, fully automated and non-obtrusive. It respects privacy and does not store any contextual information at any point in time, which leads to higher end-user acceptance than video surveillance.

We deployed a few APUs (red dots in the figure), and the challenge was also to benefit from the large deployment of conventional IoT nodes (blue dots) and to cooperate with them. We worked on two different applications of acoustic processing research: event detection (e.g. gunshots, shouting for help, breaking car glass, siren detection) and traffic monitoring.

[/one_half_last]

To study this further, we first selected two use cases: emergency detection and traffic density monitoring.

[one_half]

Emergency detection experiment

EAR-IT investigates the value of identifying specific acoustic events in an outdoor environment; emergency vehicle sirens in cities are the focus of this use case. By using the acoustic event detection functionality provided by APUs deployed at suitable spots in the wireless sensor network, sirens are identified. Recent research has already shown that machine-learning-based siren detection is feasible for various applications.

[/one_half][one_half_last]

[/one_half_last]

Loudness measures provided by the acoustic sensors already deployed in the wireless sensor network complement this new type of information and enable localization and tracking of the emergency vehicle, i.e. of its siren, across the urban area. This data can then be fed to a traffic management system to actively steer traffic lights, with the goal of reducing the overall reaction time of the authorities in case of an incident.
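A very coarse version of this loudness-based tracking can be sketched as follows: at each time step, pick the sensor reporting the highest sound level, giving an approximate trajectory of the siren through the sensor grid. The sensor ids, dB values and data shapes are invented for illustration; a real system would also smooth the track and fuse it with the APU's siren classification.

```python
# Illustrative sketch: approximate a siren's path by selecting, per time
# step, the deployed sensor with the highest reported loudness (in dB).
from typing import Dict, List

def track_loudest(samples: List[Dict[str, float]]) -> List[str]:
    """samples[t] maps sensor id -> loudness at time step t.
    Returns the id of the loudest sensor per step: a coarse trajectory."""
    return [max(frame, key=frame.get) for frame in samples]

trajectory = track_loudest([
    {"s1": 71.0, "s2": 58.0, "s3": 55.0},
    {"s1": 64.0, "s2": 69.0, "s3": 57.0},
    {"s1": 56.0, "s2": 62.0, "s3": 70.0},
])
# trajectory == ["s1", "s2", "s3"]: the siren passes s1, then s2, then s3
```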

[one_half]


Figure 7: emergency detection use case

[/one_half][one_half_last]

Traffic density monitoring

By making use of already existing technology for traffic density quantification in the test bed (inductive loops, radar, video, ultrasonic, etc.) as ground-truth information, the benefits and success of the APU can be evaluated. To this end, two different kinds of algorithms are under investigation: computationally low-cost deterministic approaches for traffic density monitoring and more complex machine-learning-based algorithms, leading to alternative, computationally scalable solutions. After the deployment of the APUs we already had good feedback on the feasibility and value of using sound for traffic monitoring.

[/one_half_last]

The preliminary results show that environmental-noise-based traffic density estimation is possible and that the solution can extend the capabilities of existing traffic monitoring systems.
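The simplest deterministic approach of this kind is banding the measured sound level into coarse traffic classes. The dB thresholds below are purely illustrative, not calibrated project values; the project's actual algorithms are evaluated against ground truth from inductive loops, radar and the other deployed systems.

```python
# A deliberately simple deterministic sketch of noise-based traffic
# density estimation: band the measured sound level (dB) into classes.
from typing import List

def traffic_level(db_level: float) -> str:
    if db_level < 55.0:
        return "low"
    if db_level < 65.0:
        return "medium"
    return "high"

levels: List[str] = [traffic_level(db) for db in (48.2, 61.7, 72.3)]
# levels == ["low", "medium", "high"]
```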


Figure 8: Example of traffic levels detected by EAR-IT APUs
