J Med Syst (2015) 39:141
DOI 10.1007/s10916-015-0327-y
PATIENT FACING SYSTEMS
Big Data, Internet of Things and Cloud Convergence - An Architecture for Secure E-Health Applications
George Suciu1 & Victor Suciu1 & Alexandru Martian1 & Razvan Craciunescu1,2 &
Alexandru Vulpe1 & Ioana Marcu1 & Simona Halunga1 & Octavian Fratu1
Received: 25 April 2015 / Accepted: 18 August 2015 / Published online: 7 September 2015
© Springer Science+Business Media New York 2015
Abstract Big data storage and processing are considered one of the main applications for cloud computing systems. Furthermore, the development of the Internet of Things (IoT) paradigm has advanced the research on Machine to Machine (M2M) communications and enabled novel tele-monitoring architectures for E-Health applications. However, there is a need for converging current decentralized cloud systems, general software for processing big data and IoT systems. The purpose of this paper is to analyze existing components and methods of securely integrating big data processing with cloud M2M systems based on Remote Telemetry Units (RTUs) and to propose a converged E-Health architecture built on Exalead CloudView, a search based application. Finally, we discuss the main findings of the proposed implementation and future directions.
Keywords Big data · IoT · M2M · Cloud computing · E-health
Introduction
Cloud computing, Big data and Internet of Things (IoT) are popular ICT paradigms and their features can be combined for shaping the next generation of eHealth systems.
Big data in E-Health enables the transformation from hypothesis driven research to data-driven research by processing large volumes of heterogeneous medical data [1]. Furthermore, by using a search based application it is possible to find weak signals in big data [2]. In particular, given the E-Health application, it is possible to leverage trivial and nontrivial connections between different sensor signals and existing big data, in order to find new ways to provide remote diagnostics, better understanding of diseases and development of innovative solutions for therapeutics.
The aggregation and analysis of such weak signals will provide evidence of connections between health problems and environment-related issues faster and better than trivial mining of sensor data. As a consequence, software that can aggregate and analyze such signals has significant potential for matching E-Health applications with critical challenges that are related in non-obvious ways in IoT scenarios.
Subsequently, the IoT paradigm relies on the identification and use of a large number of heterogeneous physical and virtual objects, which are connected to the Internet [3].
This article is part of the Topical Collection on Patient Facing Systems
* George Suciu [email protected]
Victor Suciu [email protected]
Alexandru Martian [email protected]
Razvan Craciunescu [email protected]; [email protected]
Alexandru Vulpe [email protected]
Ioana Marcu [email protected]
Simona Halunga [email protected]
Octavian Fratu [email protected]
1 Faculty of Electronics, Telecommunications and IT, Telecommunication Department, University POLITEHNICA of Bucharest, 060071 Bucharest-6, Romania
2 Center for TeleInfrastruktur (CTIF), Aalborg University, Aalborg, Denmark
IoT enables the communication between different objects, as well as the in-context invocation of their capabilities (services) towards added-value applications. Early IoT applications are based on RFID (Radio Frequency Identification) and Wireless Sensor Network (WSN) technologies and deliver tangible benefits in several areas, including manufacturing, logistics, trade, retail and green/sustainable applications, as well as other sectors.
In the case of M2M, important research initiatives propose new reference models, defining standards and new communication architectures, with different approaches for solving security, reliability and energy-efficiency problems. However, many efforts are focused on largely distributed critical infrastructures and just a few initiatives are dedicated to the definition of platforms for deployment and execution of new M2M applications, based on new generations of WSN and innovative sensor mining methods that can be deployed according to a cloud computing model [4].
Indeed, M2M applications are envisioned to be hosted within cloud computing systems and contribute to the convergence of interactions between the real and virtual world, thus accomplishing improvements in industrial productivity and quality of life for citizens, based on secure and dependable automation of sensing and actuating tasks.
However, we argue that this kind of approach is suboptimal for M2M because of its inherent paradigms: M2M applications are highly autonomous, implement simple and repetitive interactions in highly constrained environments, with limited scope in time and space, and must respond in a reliable manner to mission-critical QoS needs. On the other hand, usual cloud applications require that big volumes of data are redundantly stored and provided through content delivery networks, to be available for very long periods and accessible as ubiquitously as possible for use by human beings. Also, we discuss the applicability for other information sources and future use case scenarios.
At the same time, the cloud computing paradigm [5] realizes and promotes the delivery of hardware and software resources over the Internet according to an on-demand, utility-based model. These services hold the promise of delivering increased reliability, security, high availability and improved QoS at an overall lower total cost of ownership (TCO).
Another important issue is how far (how deep inside the network) a specific device is from the cloud, and what latency results from using a cloud infrastructure. Hence, as the number of devices in a network increases, the cloud computation effort also increases and the network introduces a larger latency because the cloud servers become overloaded [6].
Due to the above arguments, recent research efforts have focused on offloading some of the tasks (e.g., storage or processing tasks) usually done in the cloud to the edge of the network, thus resulting in the novel paradigm of Fog Computing.
The paper is organized as follows. Section 2 presents an overview of the state of the art in Cloud computing, IoT, Big data and M2M. Section 3 analyzes the components of a convergent IoT - Cloud computing architecture for E-Health applications, while Section 4 presents Big data processing tools integrated in the previously mentioned architecture. Finally, Section 5 draws the conclusions and presents future directions.
Related work
Starting with the first versions of Cloud computing, IoT and Big Data technologies, it has become apparent that their convergence could lead to a range of new applications, including in eHealth. Most IoT applications are based on M2M communication protocols between large numbers of heterogeneous geographically distributed sensors. As a result, they need to handle many hundreds (sometimes thousands) of sensor streams, and could directly benefit from the immense distributed storage capacities of cloud computing infrastructures. Furthermore, cloud infrastructures could boost the computational capacities of IoT applications, given that several multi-sensor applications need to perform complex processing that is subject to timing (and other QoS constraints). Also, several IoT services (e.g., large scale sensing experiments, smart city applications) could benefit from a utility-based delivery paradigm, which emphasizes the on-demand establishment and delivery of IoT applications over a cloud-based infrastructure.
The proclaimed benefits of the IoT/cloud convergence have (early on) given rise to research efforts that attempted to integrate multi-sensory services into cloud computing infrastructures. The most prominent of these efforts are the so-called sensor-clouds, which blend sensors into the data center of the cloud and accordingly provide service oriented access to sensor data and resources [7]. Several recent research initiatives are focusing on real-life implementation of sensor clouds, including open source implementations [8].
In addition to research efforts towards open source sensor-clouds, there are also a large number of commercial on-line cloud-like infrastructures, which enable end-users to attach their devices to the cloud, while also enabling the development of applications that use those devices and the relevant sensor streams. These systems provide tools for application development, but they offer very poor semantics and no readily available capabilities for utility-based delivery. There are also a number of other projects which have been using cloud infrastructures as a medium for Machine-to-Machine (M2M) interactions (e.g., [9]), without however adapting the cloud infrastructure to the needs of IoT delivery.
In the area of IoT applications (e.g., for smart cities), we are also witnessing cases of IoT/cloud convergence, for example deploying sensor streams over a cloud infrastructure, as means
of benefitting from the cloud's storage capacity and application hosting capabilities. A similar approach is followed in the scope of, e.g., the FP7 SmartSantander project [10]. We note that smart cities are a privileged domain for exploring and realizing the IoT/cloud convergence, given that such applications need to manage and exploit a large number of distributed heterogeneous sensor streams and actuating devices.
The plethora of data types and formats (structured, unstructured, semistructured or multistructured) as well as the diversity of generating sources (sensors, devices, social networks, web content, mobile phones, etc.) generate a large variety of data. According to McAfee [11], multistructured data already represents 80 % of the volume of data that is available in an organization.
The economic potential of Big Data represents the greatest challenge and it consists of finding value in the large volume of unstructured data in (near) real time. The tendency to use this information for business analytics is becoming a worldwide management practice.
Fruitful sources of data that have recently come to attention (IoT, sensor networks, social networks) have led to an unprecedented growth rate of the volume of available microeconomic and macroeconomic data. It is estimated that in 2012 alone, 2.5 exabytes of data were generated daily [11].
M2M communications (or Machine Type Communications, MTC) can use either wireless or fixed networks and have gained considerable interest among mobile network operators in recent years. M2M communication solutions that use mobile networks can prove to be easier to deploy and can support a large number of devices, most importantly those with mobility features. The ubiquitous connectivity nature of mobile networks (especially from 3GPP Rel. 8 onwards [12]) will enable M2M services that require reliable and immediate data delivery to distant M2M servers.
The different behaviour of M2M devices, compared to plain mobile network terminals poses a need for optimizing networking solutions, in order to specifically tailor them for M2M communications in mobile networks. Therefore, 3GPP, the Open Mobile Alliance (OMA), IEEE, ETSI and a number of other standardization bodies have launched standardization activities on M2M communications [12].
Regarding 3GPP, the focus is on system optimizations that prevent M2M signalling congestion and network overload. These are the main issues that currently prevent mass-market adoption of M2M services.
Furthermore, research in eHealth and Body Area Networks (BANs) has shown that physiological information found in the body of the monitored patient can be utilized to enable secure communication between the active sensor nodes [13]. The body sensors can extract features from physiological functions such as Heart Rate Variance (HRV) or Electrocardiography (ECG) signals as generic sources for the process of generating a cryptographic key. This approach is known as Physiological Signal based Key Agreement (PSKA).
The PSKA approach can be utilized to provide end-to-end (E2E) security in eHealth systems (i.e., to provide a secure communication channel between the sensors and the back-end medical server or cloud). This approach is also denoted as Physiology-based End-to-End Security (PEES) [14]. In PEES, the sensors utilize the features of physiological functions to encrypt (hide) the keying material through a cryptographic primitive called the vault. At the medical server or cloud, the vault is deciphered with a diagnostically equivalent physiological signal time-series. This physiological signal time-series is synthetically generated using a generative model that has been parameterised with the primary user's physiological information (e.g., ECG, HRV).
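For illustration only, the sketch below shows the underlying intuition in a few lines of Python: both the sensor side and the server side quantize diagnostically equivalent inter-pulse-interval features and derive the same symmetric key from them. This is a deliberately simplified stand-in for the actual PSKA/PEES fuzzy-vault construction; the function names, the quantization step and the use of SHA-256 are assumptions made for the example, not part of the cited schemes.

```python
# Minimal sketch of deriving a shared key from physiological features.
# This is NOT the PSKA fuzzy-vault construction; it only illustrates the idea
# that the sensor and the back-end derive matching keying material from
# diagnostically equivalent signals (e.g. ECG/HRV). All names are hypothetical.
import hashlib
from typing import List

def quantize_ipi_features(inter_pulse_intervals_ms: List[float],
                          step_ms: float = 8.0) -> bytes:
    """Quantize inter-pulse intervals into a reproducible byte string.

    Coarse quantization tolerates small measurement differences between the
    sensor node and the generative model at the medical server."""
    buckets = [int(round(ipi / step_ms)) for ipi in inter_pulse_intervals_ms]
    return b"".join(b.to_bytes(2, "big", signed=False) for b in buckets)

def derive_session_key(features: bytes, context: bytes = b"e-health-session") -> bytes:
    """Derive a 256-bit symmetric key from the quantized features."""
    return hashlib.sha256(context + features).digest()

# Both ends observe (or synthesize) equivalent physiological time-series,
# so they arrive at the same key without ever transmitting it over the air.
sensor_side = derive_session_key(quantize_ipi_features([812.4, 798.1, 805.6, 821.9]))
server_side = derive_session_key(quantize_ipi_features([813.0, 797.5, 806.2, 822.3]))
assert sensor_side == server_side  # holds because quantization absorbs the noise
```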
One has to pay particular attention to the BAN when security is based on a PSKA approach. A study [15] proved that, when two individuals are in close proximity, the electrocardiogram (ECG) of one person gets coupled to the electroencephalogram (EEG) of the other, indicating a possibility of proximity-based security attacks. It was also shown that proximity-based attacks can succeed even without exact reconstruction of the physiological data sensed by the attacked BAN.
Most of the advances in Wireless Sensor Network (WSN) and BAN security either lack performance (in terms of computational efficiency, energy efficiency, reliability and security) or lack the applicability and scalability that would enable implementation in real-world healthcare systems. Advanced security frameworks for WSN and BAN communication can be developed by incorporating and developing novel and efficient security solutions based on the notions of Physical Unclonable Functions (PUF) and PSKA. Such frameworks should be capable of mitigating the common problems of applicability, scalability, energy efficiency, runtime overhead, reliability, etc.
Additionally, cloud computing is following the decentralized approach of IoT, and research papers address the importance of fog computing in new network architectures. The authors in [16] explain the benefits of migrating from a cloud-based architecture to a combined cloud and fog-based architecture, describe how to perform the migration, and propose a novel algorithm that helps networks in the migration process. The authors in [17] use fog computing for web page optimization, minimizing HTTP requests based on knowledge unique to the edge nodes (e.g., device computing power, network status). A more exhaustive study is presented in [18], where the authors evaluate the performance of mobile devices that share their heterogeneous resources (e.g., CPU, bandwidth, content). The authors also propose an architecture and a mathematical framework for resource sharing based on the idea of service-oriented key functions. As security is one of the key topics in
the IoT area, there are a number of papers on this topic [19, 20], or on how to improve cloud security using fog nodes [21].
The main characteristics of Fog Computing, as explained in [22, 23], are location awareness, low latency, real-time interactions and heterogeneity support.
Components of the converged cloud IoT e-health architecture
In this section we describe the components of an edge-oriented cloud architecture in which the end-user clients or near-user edge devices collaborate in order to process and store data locally, or to shorten the communication links and reduce latency, thus obtaining a better quality of service.
To illustrate the need for interconnecting extremely heterogeneous sensors and gateways we consider the following example scenarios, which show short-term realistic applications of smart things enabled by M2M communication:
- Continuous care: Citizens affected by chronic diseases can be provided with sensors continuously monitoring relevant health parameters, which produce data that are conveyed through a smart phone to a remote centre for real-time analysis. In case a potentially dangerous situation is detected, an alarm can be fed back to the smart phone.
- Ambient assisted living: Activity detection sensors installed in houses where senior or disabled citizens live send data to a remote centre, so as to generate alarms in case anomalies are detected with respect to a typical pattern (e.g., prolonged lack of movement during daytime). A minimal alert-rule sketch for these first two scenarios is given after this list.
- Other scenarios: Smart grid (using IT&C technology to monitor the electrical grid and improve its energy efficiency, reliability, economics and sustainability), traffic management (planning, control and monitoring of transport logistics) and fleet management (vehicle maintenance, telematics and safety management).
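As a concrete illustration of the first two scenarios, the following minimal Python sketch shows the kind of alert rule a remote centre could apply to incoming samples; the field names and thresholds are illustrative assumptions, not parameters of the deployed system.

```python
# Hypothetical alert rules for the continuous-care and ambient-assisted-living
# scenarios: the remote centre inspects incoming samples and feeds an alarm
# back to the smart phone when a potentially dangerous situation is detected.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sample:
    heart_rate_bpm: Optional[float] = None           # continuous care
    spo2_percent: Optional[float] = None              # continuous care
    minutes_without_motion: Optional[float] = None    # ambient assisted living

def evaluate(sample: Sample) -> Optional[str]:
    """Return an alarm message, or None if the sample looks normal.
    Thresholds are illustrative assumptions only."""
    if sample.heart_rate_bpm is not None and not 40 <= sample.heart_rate_bpm <= 140:
        return f"Abnormal heart rate: {sample.heart_rate_bpm:.0f} bpm"
    if sample.spo2_percent is not None and sample.spo2_percent < 90:
        return f"Low blood oxygen saturation: {sample.spo2_percent:.0f}%"
    if (sample.minutes_without_motion is not None
            and sample.minutes_without_motion > 120):
        return "Prolonged lack of movement during daytime"
    return None

alarm = evaluate(Sample(heart_rate_bpm=152, spo2_percent=95))
if alarm:
    print("Alarm sent to smart phone:", alarm)
```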
Cloud architecture
In this section we introduce SlapOS [24], provide an overview of the decentralized cloud system and explain, in particular, the roles of the Master node and the Slave nodes, as well as the software components they rely on to operate a distributed cloud for E-Health applications.
Slave nodes request from the Master nodes which software they should install and which software they should run, and report to the Master node how many resources each running software instance has been using over a certain period of time.
Master nodes keep track of the available Slave node capacity and the available software. Master nodes also act as Web portals and Web services, so that end users and software bots can request software instances, which are then instantiated and run on Slave nodes.
Master nodes are stateful. Slave nodes are stateless. More precisely, all information required to rebuild a Slave node is stored in the Master node. This may include the URL of a backup service which keeps an online copy of data so that in case of failure of a Slave node, a replacement Slave node can be rebuilt with the same data.
It is thus very important to make sure that the state data present in the Master node is well protected. This could be implemented by hosting the Master node on a trusted IaaS infrastructure with redundant resources, or, better, by hosting multiple Master nodes on many Slave nodes located in different regions of the world, thanks to appropriate data redundancy heuristics. Here we touch on the reflexive nature of SlapOS: a SlapOS Master is normally a running instance of the SlapOS Master software instantiated on a collection of Slave nodes which, together, form a trusted hosting infrastructure. In other terms, SlapOS is self-hosted.
SlapOS Slave nodes are relatively simple compared to the Master node. Every slave node needs to run software requested by the Master node. It is thus on the Slave nodes that software is installed. To save disk space, Slave nodes only install the software which they really need.
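To make the interaction between Slave and Master nodes more concrete, the following Python sketch outlines the request-and-report loop described above; the endpoint paths, node identifiers and payload fields are hypothetical placeholders rather than the actual SLAP protocol used by SlapOS.

```python
# Minimal sketch of the Slave-node loop described above: ask the Master node
# which software to install and run, then report how many resources each
# running instance has consumed. Endpoint paths and payload fields are
# hypothetical placeholders, not the real SLAP protocol of SlapOS.
import time
import requests  # third-party HTTP client, assumed available

MASTER_URL = "https://master.example.org"  # hypothetical Master node address
NODE_ID = "slave-node-42"                  # hypothetical Slave node identifier

def fetch_requested_software() -> list:
    """Ask the Master node which software releases this node should install and run."""
    resp = requests.get(f"{MASTER_URL}/nodes/{NODE_ID}/requested-software", timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g. [{"release": "kvm", "instances": 2}, ...]

def report_usage(release: str, cpu_seconds: float, disk_mb: float) -> None:
    """Report to the Master how many resources a running software instance used."""
    requests.post(
        f"{MASTER_URL}/nodes/{NODE_ID}/usage",
        json={"release": release, "cpu_seconds": cpu_seconds, "disk_mb": disk_mb},
        timeout=10,
    ).raise_for_status()

def slave_loop(poll_period_s: int = 300) -> None:
    """Stateless Slave-node loop: everything needed to rebuild it lives on the Master."""
    while True:
        for entry in fetch_requested_software():
            # Installing the release and supervising its instances is omitted here;
            # afterwards the node accounts for what each instance consumed.
            report_usage(entry["release"], cpu_seconds=12.5, disk_mb=340.0)
        time.sleep(poll_period_s)
```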
M2M architecture for e-health
In Fig. 1 we present the general structure of the system that we propose for the tele-monitoring of health-related parameters. At each of the monitored installation sites, a system is set up composed mainly of distant RTUs, sensors and actuators. RTUs capable of communicating with the Gateway through GSM/GPRS and the Internet are used in standard configurations.
The system's key elements are:
- Gateway, which ensures the communication with the RTUs and the management of available resources;
- Presentation Service (PS), which is hosted on a computer with server features (for example, unattended 24/7 operation), equipped with a software package focused mainly on data presentation in various forms, entirely available to users;
- Application Service (AS), focused on special tasks which the PS cannot perform.
Practically all system communication is done through the Internet, which gives the system both investment and, mostly, operational advantages. Users can access the processed data offered by the PS and AS anywhere and anytime, from any terminal with Internet access (PC, tablet, mobile phone, etc.).
The system's central elements are configured and scaled so as to allow a system capacity of 100 RTUs.
Fig. 1 General structure of the E-Health system (RTUs with their sensors and actuators connect over GSM-GPRS and the Internet to the Gateway, which serves the Presentation Service and the Application Service used by the users)
The system also includes a wearable medical module, equipped with a pulse oximeter and an accelerometer, interfaced with the microcontroller through a customized shield on top of which a ZigBee module is plugged. Its purpose is to monitor movement speed and direction, pulse and blood oxygen saturation. The whole ensemble is powered by an AAA battery, as shown in Fig. 2.
All the data from the modules are processed in the cloud node and relevant metadata are extracted (e.g., for the wearable module we process the acceleration on the x, y and z axes and extract the integral of the modulus of body acceleration, or fall information) and sent to the cloud platform or to a notification manager running on the AS.
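The following Python sketch illustrates this metadata extraction step: it computes the integral of the modulus of body acceleration over a window of raw x, y, z samples and applies a simple peak threshold as a stand-in for fall detection. The sampling period and the threshold are assumptions for the example, not the values used in the implemented module.

```python
# Sketch of the metadata extraction described above: from raw x, y, z
# acceleration samples (in g), compute the integral of the modulus of body
# acceleration over a window and flag a possible fall. Sampling rate and
# threshold values are illustrative assumptions.
import math
from typing import List, Tuple

def integral_of_acceleration_modulus(samples: List[Tuple[float, float, float]],
                                     sample_period_s: float = 0.02) -> float:
    """Trapezoidal integral of |a| over the window, in g*s."""
    moduli = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    return sum((moduli[i] + moduli[i + 1]) / 2.0 * sample_period_s
               for i in range(len(moduli) - 1))

def looks_like_fall(samples: List[Tuple[float, float, float]],
                    peak_threshold_g: float = 2.5) -> bool:
    """Very simple heuristic: a fall shows a short, high acceleration peak."""
    return any(math.sqrt(x * x + y * y + z * z) > peak_threshold_g
               for x, y, z in samples)

# A quiet window with one sharp peak in the middle (hypothetical data).
window = [(0.0, 0.0, 1.0)] * 20 + [(1.8, 2.1, 0.4)] + [(0.0, 0.0, 1.0)] * 20
metadata = {
    "ibam_g_s": round(integral_of_acceleration_modulus(window), 3),
    "fall_detected": looks_like_fall(window),
}
# `metadata` is what would be forwarded to the cloud platform / notification manager.
```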
Big data processing for convergent cloud IoT e-health applications: characteristics and discussions
Currently there are many solutions on the market for search and analysis of large volumes of information. These solutions
are focused on semantic technologies for aggregating and collating data, both structured and unstructured [25].
A cloud middleware layer can be developed based on three sub-layers, as follows: local cloud components on slave sensors node, edge cloud components on master gateway nodes and central cloud components on application / datastore server nodes. The resulting cloud architecture for the interconnection with a healthcare platform is illustrated in Fig. 3.
Cloud middleware enables heterogeneous devices to act as data sources and to integrate data from other healthcare platforms. Being located at the intersection between central and local cloud, the cloud middleware is designed to implement home gateway functionality and to ensure the security of user information. The local cloud enables the interconnection with different devices through standardized network protocols and interfaces.
Fig. 2 General structure of the wearable medical module
Fig. 3 Big data processing on convergent IoT/Cloud E-Health architecture (local/device cloud, edge cloud and central cloud with Big Data storage, connected through the cloud middleware and its security layer to healthcare platforms such as epSOS and to standards and specifications such as those of the Continua Health Alliance)
In our approach we use Exalead CloudView [26], a search platform which offers wide access to information at the infrastructure level and is used for Search Based Applications (SBA) both online and at the enterprise level. This application combines semantic technologies for developing applications, such as drag-and-drop, with qualitative and quantitative analysis, in order to provide information to the user.
Located at the intersection between search and Business Intelligence, CloudView is also the platform for the Exalead search engine, which was designed to implement semantic processing and selective navigation of data volumes available on the Web, making it easier for users to search and analyse information and enabling organizations to improve the exploitation of their knowledge and resources.
Exalead CloudView is a tool that allows the exploitation of large volumes of structured and unstructured data from different sources available to search applications, and enables their presentation in an intuitive search interface. CloudView uses a semantic technology platform that combines formats, structures and terminologies in a modular index, providing continuous access to and use of resources through server distribution and data redundancy.
CloudView accesses documents from different information sources using so-called connectors. Each connector uses the data source's protocol to connect to its own information source and access the documents to be indexed. With connectors, users can add new documents, update and delete existing documents, extract the current list of indexed documents and manage data security.
CloudView provides multiple interfaces for data management and for the configuration of the application itself. These interfaces are represented by several APIs (Application Programming Interfaces), including a Push API for creating custom connectors, a Search API used for developing search applications, and a Management API for configuring and managing the indexing and search processes. The Push API allows data from any source to be indexed via CloudView and supports the basic operations required for the development of new connectors, both managed and unmanaged.
A managed connector is code that runs inside CloudView and can be developed in Java. An unmanaged connector is an external component that sends data to CloudView using the Push API; it can be developed in any language, either through the Push API clients (available in Java, C# and PHP) or through the HTTP API.
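As an illustration of the unmanaged-connector route, the sketch below pushes a single document to a CloudView instance over HTTP from an external Python process. The host, port, URL path and parameter names are placeholders, and the exact Push API endpoint and fields should be taken from the Exalead documentation rather than from this example.

```python
# Hedged sketch of an unmanaged connector: an external process that sends a
# document to CloudView over HTTP for indexing. The host, port, URL path and
# parameter names below are placeholders; consult the Exalead Push API
# documentation for the exact endpoint and fields.
import requests  # third-party HTTP client, assumed available

PUSH_SERVER = "http://cloudview.example.org:10002"  # hypothetical Push server
SOURCE = "ehealth-telemetry"                        # hypothetical source/connector name

def push_document(uri: str, fields: dict) -> None:
    """Send one document (identified by its URI) together with its metadata fields."""
    resp = requests.post(
        f"{PUSH_SERVER}/papi/source/{SOURCE}/document",  # placeholder path
        params={"uri": uri},
        data=fields,
        timeout=30,
    )
    resp.raise_for_status()

push_document(
    "rtu://site-12/wearable-03/2015-04-25T10:00:00Z",
    {"patient_id": "anon-0042", "spo2": "97", "pulse": "72", "fall_detected": "false"},
)
```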
Figure 4 provides a simplified view of the indexing process. It works as follows: connectors accessing the data sources convert files into documents and send them to the Push Server, which divides the documents into groups for analysis.
Analysis is performed sequentially, with each document being processed in turn. It carries out text retrieval, semantic processing, localization and other custom operations, and it also calculates the data needed to update the index. Once indexing is done and the index is updated, the new documents are available for searching.
Connectors operate early in the CloudView indexing process. They send documents from different sources to a Push API server through the PAPI protocol. When creating new connectors, the connector server that will host them must be specified. If the connector server already exists, all new connectors will be automatically associated with it.
With the advent of cloud computing, the convergence of cloud computing with WSN has been attempted, as an extension of the sensor grid concept in the scope of on-demand elastic cloud-based environments. This convergence aims at reconciling the radical differences and conflicting properties of the two technologies. In particular, sensor networks are location specific, resource constrained, expensive (in terms of development/deployment cost) and generally inflexible in terms of resource access and availability. On the contrary, cloud-based infrastructures are location independent and provide a wealth of inexpensive resources, as well as rapid elasticity.
Fine-grained (raw) data have to be conveyed in a centralized manner over the Internet from the sensors up to the remote centre, so as to give the latter the high-resolution information it needs to take decisions. Therefore, the things and gateways are effectively separated from the back-end, which provides the storage and computation functions.
Fig. 4 Simplified view of the indexing process (connectors push documents from the data sources through the Push API; analysis and localization follow; the resulting index is queried by search applications through the Search API on behalf of the end user)
This separation is both physical, through the Internet, and logical, via a set of abstraction layers. Such an approach has worked well in the past for similar technologies, like the Web, from which the initial IoT attempts inherit most of their design and characteristics. However, we argue that full and sustainable exploitation of M2M applications needs a tighter integration between the real and virtual world. The following issues of what we can call "the current approach" can be identified: scalability, security, reliability, QoS, resource/energy efficiency and multi-domain implementation.
To overcome the limitations of the current systems for M2M applications, we propose a novel approach based on the following principles:
1. Storage and processing of data are as close as possible, in space and time, to where they are generated and consumed.
2. Important non-functional requirements, namely security, reliability and QoS, are supported by means of a tight integration of the high-level services provided with the physical resources of the peripheral devices, i.e., things and gateways.
3. Energy efficiency and scalability of the systems are achieved through the distribution of on-the-spot inferred content, rather than raw data.
4. Cross-domain applications using real-time data from multiple sources can be seamlessly implemented.
The current state of the art of M2M platforms is quite fragmented and there isn't a single view towards an interoperable smart object world. The commercial M2M platforms are vertically focused on solving specific sector issues and are tightly integrated with applications. This approach, inherited from legacy telemetry applications, has created a multitude of sensor devices that are not interoperable with each other, with high boundaries and with integration possible only at the database or presentation layers.
For many years, M2M deployments were based on proprietary/custom applications and networking infrastructures, which were typically expensive to build, manage and maintain. Today's market for sensor devices is a hotbed of idea generation, as the prospect of embedding intelligence in the form of M2M technology into mobile devices has everyone excited about the possibilities. The current market is already filled with devices that can track everything from blood-glucose levels to traffic patterns.
Conclusion
The paper discussed a number of popular ICT paradigms, including Cloud computing, IoT and Big Data. It provided an extensive state of the art review of them and the
convergence between them. Next, we proposed an M2M system based on a decentralized cloud architecture, general systems and Remote Telemetry Units (RTUs) for E-Health applications. The system was built for Big Data processing of sensor information in such a way that data can be aggregated to generate "virtual" sensors, and some measurement results were presented.
With the advent of smart mobile devices, Internet access has become ubiquitous and has opened the way for new applications that use M2M communications. Indeed, we envision our work on the Internet of Things paradigm as an evolution of the traditional Internet for seamlessly integrating most things around us and collecting big data from sensors that track everything happening in our environment.
Acknowledgments This work has been funded by the Sectoral Operational Programme Human Resources Development 2007-2013 of the Ministry of European Funds through the Financial Agreements POSDRU/159/1.5/S/134398, POSDRU/159/1.5/S/134397 and POSDRU/159/1.5/S/132395, and supported in part by UEFISCDI Romania under grant no. 262EU/2013 "eWALL" support project and grant no. 337E/2014 "Accelerate" project, and by the European Commission through FP7 IP project no. 610658/2013 "eWALL for Active Long Living - eWALL".
References
1. Kahn, E., Natural language processing, big data, bioinformatics and biology. Int. J. Biol. Biomed. Eng. 8:107-117, 2014.
2. Ochian, A., Suciu, G., Fratu, O., and Suciu, V., Big data search for environmental telemetry. In: Communications and Networking (BlackSeaCom), IEEE International Black Sea Conference on, 182-184, 2014.
3. Vermesan, O., Friess, P., Guillemin, P., and Gusmeroli, S., Internet of things strategic research agenda. In: Internet of Things - Global Technological and Societal Trends. River Publishers, 2011.
4. Suciu, G., Halunga, S., Fratu, O., Vasilescu, A., and Suciu, V., Study for renewable energy telemetry using a decentralized cloud M2M system. In: Wireless Personal Multimedia Communications (WPMC), IEEE 15th International Symposium on, 1-5, 2013.
5. McFedries, P., The cloud is the computer. IEEE Spectr. 45(8):20-22, 2008.
6. CISCO: Visual networking index - global mobile data traffic forecast 2014-2019, CISCO whitepaper, 2015.
7. Hassan, M. M., Song, B., and Huh, E. N., A framework of sensor-cloud integration opportunities and challenges. In: Proceedings of International Conference on Ubiquitous Information Management and Communication, 618-626, 2009.
8. Fox, G. C., Kamburugamuve, S., and Hartman, R. D., Architecture and measured characteristics of a cloud based internet of things. In: IEEE Collaboration Technologies and Systems (CTS), International Conference on, 6-12, 2012.
9. Kranz, M., Holleis, P., and Schmidt, A., Embedded interaction: interacting with the internet of things. IEEE Internet Comput. 14(2):46-53, 2010.
10. Jara, A. J., Genoud, D., and Bocchi, Y., Sensors data fusion for smart cities with KNIME: a real experience in the SmartSantander testbed. In: Internet of Things (WF-IoT), 2014 IEEE World Forum on, 173-174, 2014.
11. McAfee, A., Brynjolfsson, E., Davenport, T. H., Patil, D. J., and Barton, D., Big data: the management revolution. Harv. Bus. Rev. 90(10):61-67, 2012.
12. The 3rd Generation Partnership Project (3GPP), TS 23.888, "System improvements for Machine-Type Communications (MTC)," Version 11.0.0, 2012.
13. Venkatasubramanian, K. K., Banerjee, A., and Gupta, S. K. S., PSKA: usable and secure key agreement scheme for body area networks. IEEE Trans. Inf. Technol. Biomed. 14(1):60-68, 2010.
14. Banerjee, A., Gupta, S., and Venkatasubramanian, K. K., PEES: physiology-based end-to-end security for mHealth. In: The Wireless Health Academic/Industry Conference, 1-8, 2013.
15. Bagade, P., Banerjee, A., Milazzo, J., and Gupta, S. K. S., Protect your BSN: no handshakes, just namaste! In: Proc. 2013 IEEE International Conference on Body Sensor Networks (BSN), 1-6, 2013.
16. Ottenwälder, B., Koldehofe, B., Rothermel, K., and Ramachandran, U., MigCEP: operator migration for mobility driven distributed complex event processing. In: 7th ACM Int. Conf. Distributed Event-based Systems, 183-194, 2013.
17. Zhu, J., Chan, D. S., Prabhu, M. S., Natarajan, P., Hao, H., and Bonomi, P., Improving web sites performance using edge servers in fog computing architecture. In: 7th IEEE International Symposium on Service Oriented System Engineering (SOSE), 320-323, 2013.
18. Nishio, T., Shinkuma, R., Takahashi, T., and Mandayam, N. B., Service-oriented heterogeneous resource sharing for optimizing service latency in mobile cloud. In: 1st ACM International Workshop on Mobile Cloud Computing & Networking, 19-26, 2013.
19. Dsouza, C., Ahn, G.-J., and Taguinod, M., Policy-driven security management for fog computing: preliminary framework and a case study. In: 15th IEEE International Conference on Information Reuse and Integration (IRI), 16-23, 2014.
20. Stojmenovic, I., and Sheng, W., The fog computing paradigm: scenarios and security issues. In: Federated Conference on Computer Science and Information Systems (FedCSIS), 1-8, 2014.
21. Stolfo, S. J., Salem, M. B., and Keromytis, A. D., Fog computing: mitigating insider data theft attacks in the cloud. In: IEEE Symposium on Security and Privacy Workshops (SPW), 125-128, 2012.
22. Bonomi, F., Milito, R., Zhu, J., and Addepalli, S., Fog computing and its role in the internet of things. In: Proceedings of the first edition of the MCC Workshop on Mobile Cloud Computing, 13-16, 2012.
23. Stojmenovic, I., Fog computing: a cloud to the ground support for smart things and machine-to-machine networks. In: Telecommunication Networks and Applications Conference (ATNAC), 117-122, 2014.
24. Saad, W., Abbes, H., Jemni, M., and Cerin, C., Designing and implementing a cloud-hosted SaaS for data movement and sharing with SlapOS. Int. J. Big Data Intell. 1(2):18-35, 2014.
25. Shah, T., Rabhi, F., and Ray, P., Investigating an ontology-based approach for big data analysis of inter-dependent medical and oral health conditions. In: Cluster Computing, 1-17, 2014.
26. Eckstein, R., Interactive search processes in complex work situations: a retrieval framework. University of Bamberg Press, vol. 10, 62-67, 2011.