This PDF file contains the front matter associated with SPIE Proceedings Volume 10652, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Transcribing voice communications in NASA's launch control center is important for information utilization. However, automatic speech recognition in this environment is particularly challenging due to the lack of training data, unfamiliar words and acronyms, many different speakers and accents, and the conversational nature of the speech. We trained and tested bidirectional deep recurrent neural networks for this task and showed that data augmentation and custom language models can improve speech recognition accuracy. Transcribing communications from the launch control center will help machines analyze the information and accelerate knowledge generation.
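A minimal sketch of the kind of bidirectional deep recurrent model described, written with Keras: stacked bidirectional LSTMs map per-frame audio features (e.g., MFCCs) to per-frame character probabilities. The feature dimension, vocabulary size, and layer widths are illustrative assumptions, and the CTC-style loss normally used to align frames with transcripts is omitted for brevity.

```python
# Illustrative bidirectional deep RNN for speech frames (not the paper's exact model).
from tensorflow.keras import layers, models

N_FEATURES, N_CHARS = 40, 30   # assumed MFCC dimension and character-set size

model = models.Sequential([
    layers.Input(shape=(None, N_FEATURES)),              # variable-length utterances
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(N_CHARS, activation="softmax")),
])
model.summary()
```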
Computer-based facial recognition algorithms exploit the unique characteristics of faces in images. In non-cooperative situations, however, these characteristics are often disturbed. In this study, we examine the effect of six factors on face detection in an unconstrained imaging environment: image brightness, image contrast, focus measure, eyewear, gender, and occlusion. The aim of this study is twofold: first, to quantify the detection rates of conventional Haar cascade algorithms across these six factors; and second, to propose methods for automatically labeling datasets whose size prohibits manual labeling. We first manually classify a uniquely challenging dataset comprising 9,688 images of passengers in vehicles acquired from a roadside camera system. Next, we quantify how each of these factors affects face detection on this dataset. Of the six factors studied, occlusion had the most significant impact, resulting in a 54% decrease in detection rate between unoccluded and severely occluded faces. Finally, we provide a methodology for data analytics of large datasets where manual labeling of the whole dataset is not possible.
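For reference, the conventional Haar cascade detection step being evaluated can be run with OpenCV's stock frontal-face cascade. A minimal sketch follows; the image path is a hypothetical stand-in for the roadside imagery, and the detection parameters are typical defaults rather than the study's tuned values.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("passenger.jpg")        # hypothetical roadside capture
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"detected {len(faces)} face(s)")  # counts drop sharply under occlusion
```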
The growing interest in the Internet of Things (IoT) in both the public and private sectors has introduced many research challenges in the areas of computational science, machine learning, cyber security, and networking. The primary factor for a robust IoT application is keeping the network alive, i.e., ensuring that a sufficient number of nodes are up and running. This enables the data sessions from source IoT nodes to destination IoT nodes required for data fusion and artificial reasoning by intelligent applications running over this ad-hoc network. In this paper, we propose an autonomous wireless charger platform in which an autonomous network unit traverses the critical area of the network and charges the IoT devices to keep the network alive. We present a cross-layer optimization framework that jointly performs wireless charging power management, scheduling, interference avoidance, and routing. Our objective is to maximize the minimum session bottleneck while keeping all nodes alive within a specific interval.
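The max-min objective can be illustrated with a toy linear program: choose per-session rates and a common floor t, and maximize t subject to capacity constraints. The two-session, single-shared-link topology below is an illustrative assumption, not the paper's cross-layer model.

```python
# Toy max-min rate allocation: maximize the bottleneck t with t <= r_i for all
# sessions, subject to a shared 10-unit link. linprog minimizes, so we use -t.
from scipy.optimize import linprog

c = [0, 0, -1]            # variables: [r1, r2, t]; objective: maximize t
A_ub = [
    [1, 1, 0],            # shared link capacity: r1 + r2 <= 10
    [-1, 0, 1],           # t <= r1
    [0, -1, 1],           # t <= r2
]
b_ub = [10, 0, 0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x)              # [5, 5, 5]: the minimum session rate is maximized at 5
```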
The Internet of Things concept describes a network of interconnected physical objects capable of gathering, processing, and communicating information about their environment, and potentially affecting the physical world around them, through their sensors, embedded processors, communication modules, and actuators. Such a network can provide vital information on events, processes, activities, and future projections about the state of a distributed system. In addition, it can give the devices inside the network awareness of their environment far beyond the range of their dedicated sensors through communication with other devices. In most cases, such a network consists of devices with different processing and communication capacities and protocols, from a variety of hardware vendors. This paper introduces an abstracted messaging and commanding framework for smart objects, aimed at making the network capable of including various communication standards. We address this issue by proposing a messaging structure based on the JavaScript Object Notation (JSON) format so that new devices connecting to the network can introduce themselves to the central coordinator. The introduction includes a list of functionalities the device is capable of and the information it needs to carry out those tasks, as sketched below. This platform allows the network to incorporate different devices with various purposes and functions with ease and flexibility; a fast, reliable, and scalable communication scheme is critical for the realization of a robust and flexible network.
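A sketch of what such a JSON introduction message might look like, built and serialized in Python. The field names are illustrative assumptions, not the paper's exact schema.

```python
import json

# Hypothetical device-introduction message: the new device lists its
# functionalities and the inputs each one needs.
introduction = {
    "device_id": "thermostat-42",
    "vendor": "acme",
    "functionalities": [
        {"name": "read_temperature", "inputs": [], "output": "celsius"},
        {"name": "set_setpoint", "inputs": ["target_celsius"], "output": "ack"},
    ],
}
payload = json.dumps(introduction)   # sent to the central coordinator on join
print(payload)
```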
With the rise of small, networked sensors, the volume of data generated increasingly requires curation by AI to determine which events are of sufficient importance to report to human operators. We consider the ultimate limit of edge computing, where it is impractical to employ external resources for the curation but individual devices have insufficient computing resources to perform the analytics themselves. In a previous paper we introduced a decentralized method that distributes the analytics over the network of devices, employing simulated annealing based on physics-inspired Metropolis Monte Carlo. In the present paper we discuss the capability of this method to balance the energy consumption of the placement on a network of heterogeneous resources. We introduce the balanced utilization index (BUI), an adaptation of Jain's Fairness Index, to measure this balance.
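Jain's Fairness Index, which BUI adapts, is straightforward to compute. The sketch below shows the index itself plus one plausible utilization-based adaptation; the paper's exact BUI definition is not reproduced here.

```python
import numpy as np

def jain_fairness(x):
    """Jain's fairness index: 1.0 when all values are equal, approaching 1/n
    when a single element dominates."""
    x = np.asarray(x, dtype=float)
    return x.sum() ** 2 / (len(x) * (x ** 2).sum())

def balanced_utilization(load, capacity):
    """Hypothetical BUI-style measure: Jain's index applied to per-device
    utilization (load/capacity) on heterogeneous resources."""
    util = np.asarray(load, float) / np.asarray(capacity, float)
    return jain_fairness(util)

print(balanced_utilization([2, 4, 8], [4, 8, 16]))  # 1.0: perfectly balanced
print(balanced_utilization([8, 1, 1], [4, 8, 16]))  # low: one device overloaded
```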
We present OpenTap, a unified interface designed as an Infrastructure-layer technology for a software-defined network measurement (SDNM) stack. OpenTap provides invocations for remotely capturing network data at various granularities, such as packet or NetFlow. OpenTap drivers can be developed that leverage open source network measurement tools such as tcpdump and nfdump, and OpenTap software can turn any computing device with network interfaces into a remotely controlled network data collection device. Although OpenTap was designed for SDNM, its interface generalizes to any data acquisition, thereby providing software-defined data acquisition (SDDA). We illustrate this generality with OpenTap drivers that leverage Phidgets USB sensors to remotely capture environmental data such as temperature. We have completed an implementation of OpenTap that uses a REST API for the invocations. Using that implementation, we study a few use cases of OpenTap for automated network management and network traffic visualization to characterize its utility for those applications. We find that OpenTap empowers rapid development of software for more complex network measurement functionality at the Control layer, such as joining network data with other sources and creating aggregates such as traffic matrices. OpenTap significantly lowers the cost and development barrier to large-scale data acquisition, thereby bringing data acquisition and analytics to an unprecedented number of users. Finally, at the Application layer, network measurement applications such as traffic matrix visualizations are easily implemented by leveraging OpenTap at the Infrastructure layer in addition to the Control layer. All of these data processing software systems will be open source and available on GitHub by the time of the conference.
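Since the abstract does not publish the REST endpoints, the following client sketch is purely hypothetical: it shows the shape of a remote capture invocation against an OpenTap-style service, with made-up paths, fields, and driver names.

```python
# Hypothetical client for an OpenTap-style REST interface. Endpoint paths and
# JSON fields are illustrative assumptions, not the actual OpenTap API.
import requests

TAP = "http://tap-host:8080"   # a device running the OpenTap service (assumed)

# Start a capture at packet granularity via a tcpdump-backed driver (assumed name).
resp = requests.post(f"{TAP}/captures",
                     json={"driver": "tcpdump", "iface": "eth0", "duration_s": 60})
resp.raise_for_status()
capture_id = resp.json()["id"]

# Later, retrieve the collected data for Control-layer processing.
data = requests.get(f"{TAP}/captures/{capture_id}/data").content
```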
Tensor decompositions are a class of algorithms used for unsupervised pattern discovery. Structured, multidimensional datasets are encoded as tensors and decomposed into discrete, coherent patterns captured as weighted collections of high-dimensional vectors known as components. Tensor decompositions have recently shown promising results when addressing problems related to data comprehension and anomaly discovery in cybersecurity and intelligence analysis. However, analysis of Big Data tensor decompositions is currently a critical bottleneck owing to the volume and variety of unlabeled patterns that are produced. We present an approach to automated component clustering and classification based on the Latent Dirichlet Allocation (LDA) topic modeling technique and show example applications to representative cybersecurity and geospatial datasets.
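As a rough illustration of the classification step, the sketch below runs scikit-learn's LDA over a bag-of-tokens matrix standing in for discretized decomposition components. The discretization itself (turning a component's high-weight indices into token counts) is an assumed preprocessing step, not necessarily the paper's method.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
# 40 components x 200 "tokens": stand-in for discretized component vectors.
component_counts = rng.integers(0, 5, size=(40, 200))

lda = LatentDirichletAllocation(n_components=5, random_state=0)
topic_mix = lda.fit_transform(component_counts)  # per-component topic weights
labels = topic_mix.argmax(axis=1)                # one cluster label per component
print(labels)
```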
Optimized Quantum Key Distribution (QKD) protocols promise to revolutionize cyber security by leveraging quantum phenomena to develop unbreakable security. Configurable quantum networks are necessary to make quantum applications accessible to multiple users. Quantum key distribution is particularly interesting because of the many ways in which the key exchange can be carried out: the key can be encoded into a weak photon source using classical methods, exchanged using pairs of photons entangled at the source, or encoded with classical hardware at the source with an entangling measurement occurring at the photons' destination. Each type of quantum key exchange has its own requirements that must be met for point-to-point implementations, which makes it exceedingly difficult to implement multi-node quantum networks. We propose a programmable network model for time-encoded quantum key distribution; this version of QKD sends entangled photons to two users, and the hardware is set up such that the relative time shift in the coincident photons encodes which measurement basis was used. The protocols were first simulated by modifying previous software built on the CHP quantum simulator, and a point-to-point key exchange was then set up in hardware to demonstrate the time-encoding aspects of the protocol.
We present the design and implementation of a software-defined quantum networking protocol and software switch integrated with a numerical quantum channel simulator. Our protocol design leverages recent advances in the OpenFlow protocol that enable software-defined control and management of optical network traffic using side-channel metadata. We implement this design by customizing the open source Open vSwitch for optical network routing, and we test the implementation using a numerical simulator of the quantum channel alongside actual network traffic. Our results support the integration of quantum communication with existing optical transport methods.
In this work, we investigate and compare centrality metrics on several datasets. Many real-world complex systems can be addressed using a graph-based analytical approach, where nodes represent the components of the system and edges the interactions or relationships between them. Different systems such as communication networks and critical infrastructure are known to exhibit common characteristics in their behavior and structure. Infrastructure networks such as power grids, communication networks, and natural gas systems are interdependent; these systems are usually coupled such that failures in one network can propagate and affect the entire system. The purpose of this analysis is to perform a metric analysis on synthetic infrastructure data. Our view of critical infrastructure systems holds that the function of each system, and especially the continuity of that function, is of primary importance. We view an infrastructure as a collection of interconnected components that work together as a system to achieve a domain-specific function; the importance of a single component is based on how it contributes to that function, which we assess with centrality metrics.
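A minimal sketch of this kind of assessment with networkx, using a random graph as a stand-in for the synthetic infrastructure data; the metric choices are common defaults rather than the paper's full set.

```python
import networkx as nx

G = nx.erdos_renyi_graph(50, 0.1, seed=1)  # stand-in for synthetic infrastructure

metrics = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}
for name, scores in metrics.items():
    top = max(scores, key=scores.get)       # the most "important" component
    print(f"{name}: most central node = {top} ({scores[top]:.3f})")
```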
In this paper, we present a resilient detection algorithm for multiple targets in a distributed environment with limited information sharing. The problem setup is as follows. There are M agents (detectors/sensors) used to collaboratively detect the behaviors of N targets, where the number of agents is much smaller than the number of targets (M << N). Targets are assumed to be located in a 2D environment (the extension to 3D is straightforward). Each agent has a limited sensing/communication range and can only detect the small group of targets within that range. Agents maintain a strongly connected communication topology only at certain time intervals, and each agent can communicate with its neighboring agents about its target-detection situation. The proposed distributed detection algorithm is based on consensus theory. Its resilience is verified through extensive simulations under four scenarios: (1) agents with limited sensing/communication capabilities; (2) unexpected agent failure; (3) unexpected communication link dropout; and (4) intermittent communications. The proposed design provides a new solution for control and estimation of unmanned autonomous systems.
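The consensus machinery underlying such algorithms can be sketched in a few lines: each agent repeatedly nudges its local estimate toward its neighbors' values, and all estimates converge to the network average. This is generic consensus averaging, not the paper's full resilient detection algorithm.

```python
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)      # adjacency of a strongly connected topology
x = np.array([1.0, 4.0, 2.0, 7.0])       # agents' initial local estimates

eps = 0.25                                # step size below 1/max_degree
for _ in range(50):
    x = x + eps * (A @ x - A.sum(1) * x)  # x_i += eps * sum_j a_ij (x_j - x_i)
print(x)                                  # all entries converge to the average, 3.5
```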
Unmanned Aerial Systems (UASs) are becoming increasingly popular for amateur use, but their arbitrary deployment poses severe public safety threats to critical infrastructures such as airports. Typically, an Amateur Unmanned Aerial System (AUAS) communicates with a ground control station (GCS) through a telemetry radio, which keeps transmitting data even under poor connection conditions, so accurate AUAS detection is of great significance. In this paper, we propose a novel surveillance framework that leverages Surveillance Unmanned Aerial Systems (SUASs) to detect AUASs. The approximate position of an AUAS is first estimated by Ground Surveillance Nodes (GSNs) with radio receivers, and SUASs are then activated to determine its precise position. Different from previous research, this framework not only leverages both ground and aerial surveillance capabilities but also integrates both radio and image processing techniques, achieving enhanced AUAS detection capability. The platform has the potential to be integrated with other advanced technologies, combining the recognition of radio signals and imagery into a holistic solution for effective AUAS detection.
Unmanned aerial vehicles (UAVs), commonly known as drones, have the potential to enable a wide variety of beneficial applications in areas such as monitoring and inspection of physical infrastructure, smart emergency/disaster response, agriculture support, and observation and study of weather phenomena, including severe storms. However, the increasing deployment of amateur UAVs (AUAVs) puts public safety at risk. A promising solution is to deploy surveillance UAVs (SUAVs) for the detection, localization, tracking, jamming, and hunting of AUAVs; accurate localization and tracking is key to the success of AUAV surveillance. In this article, we propose a novel framework for accurate localization and tracking of an AUAV by cooperating SUAVs. At the heart of the framework is a localization algorithm called the cooperation coordinate-separation interacting multiple model extended Kalman filter (CoCS-IMMEKF). The algorithm simplifies the set of multiple models and eliminates the model competition of each motion direction through coordinate separation, while fusing multi-SUAV cooperative detection to improve accuracy. Compared with the classical interacting multiple model unscented Kalman filter (IMMUKF), it achieves better target estimation accuracy and higher computational efficiency, and adapts well to target localization and tracking in SUAV systems.
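For orientation, a textbook extended Kalman filter step, the building block inside IMM-style trackers, looks as follows in numpy: a constant-velocity target observed through a nonlinear range measurement. This is a generic EKF sketch with assumed noise values, not CoCS-IMMEKF itself.

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])      # constant-velocity dynamics for [x, y, vx, vy]
Q = 0.01 * np.eye(4)              # process-noise covariance (assumed)
R = np.array([[0.5]])             # range-measurement noise (assumed)

def h(x):                         # nonlinear measurement: range from the sensor
    return np.array([np.hypot(x[0], x[1])])

def H_jac(x):                     # Jacobian of h evaluated at x
    r = np.hypot(x[0], x[1])
    return np.array([[x[0] / r, x[1] / r, 0.0, 0.0]])

def ekf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                       # predict
    H = H_jac(x)
    S = H @ P @ H.T + R                                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                      # Kalman gain
    return x + K @ (z - h(x)), (np.eye(4) - K @ H) @ P  # update

x, P = ekf_step(np.array([10.0, 5.0, 1.0, 0.0]), np.eye(4), np.array([11.0]))
print(x)
```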
A key component of the Third Offset Strategy proposed by the United States Department of Defense is the use of unmanned autonomous systems to deter potential conflicts. Collaborative autonomy technologies are also being explored by the private sector, which is rapidly pushing towards the deployment of self-driving vehicles. For areas affected by disaster, autonomous drone swarms can assist with search and rescue operations by surveilling large regions quickly without exposing emergency responders to risk prematurely. A substantial amount of progress has been made in distributed sensing research over the last few years. However, simulation results for applications that require complex inter-agent communications have rarely been demonstrated at scale; these simulations are generally executed using tens or hundreds of agents rather than the thousands or tens of thousands envisioned for large autonomous swarms. We address this deficit here by presenting two contributions. First, we extend our previous work on efficient, distributed algorithms for weak radiation source detection to accommodate the use case of surveillance across a very wide area. We then demonstrate the efficacy of the proposed algorithms at scale using a parallelized version of the ns-3 discrete event simulator.
With more than three billion people accessing the internet each day, cyber crime is becoming an increasingly pervasive and complex threat that governments and businesses encounter. Organizations must secure their enterprises using the most advanced cybersecurity and analytic tools available. The purpose of this research effort was to create a platform, using open source tools and code, with the capability to data mine the DarkNet, the internet's deepest layer. For most of us, the internet is a place to learn, communicate, and transact business; we routinely enter personal data including medical histories, credit card information, and photos of loved ones. For malicious hackers, the internet is the ideal environment for cyber crime activity: private portals are routinely compromised and our information made public to user networks deep within the web. Using a secure algorithm, our tools demonstrate methods to automatically and anonymously data mine one of the largest collections of DarkNet
Theorized since the 1970s and brought to public awareness through HP's tumultuous marketing efforts starting in 2008, the memristor has promised radical disruptions in computing technology. Like almost all new and disruptive technologies, the promises failed to materialize as the gap between theory and reality stubbornly refused to close. Rather than fading into obscurity, however, interest in the memristor has continued its exponential rise. While it is only natural to use a new tool in the same manner as an old familiar one, we argue that the future success of the memristor lies in understanding what makes it unique and in firmly understanding where theoretical idealizations deviate from hard reality. After a brief history of memristor technology, we discuss some of the more disruptive memristor applications with a focus on synaptic processors, and conclude with a review of the Knowm Inc. memristor technology stack and a live demonstration of memristors.
Major advancements in computational and sensor hardware have enormously facilitated the generation and collection of research data by scientists: the volume, velocity, and variety of Big 'Research' Data have increased across all disciplines. A visual analytics platform capable of handling extreme-scale data will enable scientists to visualize unwieldy data in an intuitive manner and guide the development of sophisticated, targeted analytics to obtain usable information. The Reconfigurable Visual Computing Architecture is an attempt to give scientists the ability to analyze the extreme-scale data they collect. It requires the research and development of new interdisciplinary technological tools that integrate data, real-time predictive analytics, visualization, and acceleration on heterogeneous computing platforms, and it will provide scientists with a streamlined visual analytics tool.
Even though legacy scientific code bases have long lifetimes and are not going away anytime soon, by 2030 disruptive technologies will have dramatically changed how software is built. Are we ready to tackle tomorrow's hardware? Relying on traditional programming paradigms will no longer suffice; we need to explore software abstractions and advanced smart techniques in compiler and runtime systems to effectively exploit tomorrow's hardware. With the advent of neuromorphic chips and quantum computers as effective accelerators to which traditional x86 systems could offload relevant computations, we need to rethink the impact of these disruptive technologies on today's software and fundamental programming abstractions.
We review cyber security and chart a course for the future. To understand the problems of cyber security is to know the risks and the fundamental issues encompassing it. Today, the internet and computers have no boundaries: data is available locally, remotely, and distributed in the cloud, and connectivity and sharing across software applications have caused a super-exponential growth in the number of possible ways to exploit systems. There is a great need for seamless cyber security protection covering prevention, identification, and defense against attacks; we need cyber security that is simple to use, has low overhead, and really works. We believe the future of cyber security is at the hardware level. There are no boundaries to cyber security: attackers, intruders, pranksters, and malicious users can be external, internal, local, next door, or on the other side of the world. The attacker's goal is to exploit a weakness, remain anonymous, and take advantage of stolen information. The risks are many, ranging from simple to define to beyond the state of the art to understand. Traditional attack paths include viruses (malicious code hidden in software), worms (which automatically self-propagate), and Trojan horses (which create remote back doors for attackers). More recent attack paths take advantage of hardware-assisted virtualization and hardware-assisted virtual machines, and the new year 2018 brought new information-leakage attacks called Spectre [3] and Meltdown [4]. Malicious code can combine viruses, worms, Trojan horses, scripts, et al. to attack multiple, cross-platform systems. For future cyber security, we need simple cyber mechanisms amenable to machine-generated formal proofs for the whole system (including hardware and software). We present research directions for the future.
Ear biometrics has seen increasing interest from the computer vision research community in recent years, mainly because ear geometric features can be extracted in a non-intrusive way, are unique to each individual, and do not change over time. Different techniques have been proposed to extract ear features in 2D and 3D space and use them in a person recognition system. In this work, we propose Deep-Ear, a deep convolutional residual network for ear recognition. The proposed algorithm uses a 50-layer deep residual network (ResNet50) as a feature extractor, followed by two fully connected layers and a final softmax layer for classification. Experimental tests were performed on the AMI-DB ear dataset. The obtained top-1 accuracy is 95.67% and the top-3 accuracy is 99.67%. These results show that the proposed architecture is promising for developing a robust, feature-free ear recognition technique based on deep learning.
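A minimal Keras sketch of a Deep-Ear-style classifier: a ResNet50 backbone used as a feature extractor, followed by fully connected layers and a softmax head. The dense-layer widths and class count are illustrative assumptions; the paper's exact head configuration may differ.

```python
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models

# ResNet50 backbone used purely as a (frozen) feature extractor.
base = ResNet50(weights="imagenet", include_top=False,
                pooling="avg", input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.Dense(512, activation="relu"),      # two fully connected layers
    layers.Dense(256, activation="relu"),
    layers.Dense(100, activation="softmax"),   # e.g., 100 subjects in the gallery
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```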
Traditional malware detection is performed by pattern matching files against a database of known signatures. This approach has several limitations, including zero-day attacks and encryption. We envision an alternative strategy whereby machine learning (ML) models are trained to classify malware on dynamically derived CPU instruction streams. Many ML algorithms have the potential to recognize code fragments not explicitly seen before. Furthermore, the analysis of dynamic instruction streams (vs. static disassembly) potentially defeats encryption, as encrypted malware must decrypt itself before becoming operational. In this work, we begin to assess the viability of our vision by using convolutional neural networks to classify the function of various types of small programs from their streams of CPU instructions. Intriguingly, we find that a model composed of a few layers of convolutional filters performs on par with a shallow single-layer convolutional network.
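A sketch of a small convolutional classifier over token-encoded instruction streams; the vocabulary size, sequence length, class count, and filter shapes are illustrative assumptions rather than the paper's architecture.

```python
from tensorflow.keras import layers, models

VOCAB, SEQ_LEN, N_CLASSES = 256, 1024, 5     # assumed encoding parameters

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB, 32),             # instruction tokens -> dense vectors
    layers.Conv1D(64, 7, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(64, 7, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```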
Emerging as one of the most contemporary machine learning techniques, deep learning has shown success in areas such as image classification, speech recognition, and even game playing, through the use of hierarchical architectures comprising many layers of non-linear information processing. In this paper, a powerful deep learning pipeline, intelligent deep learning (iDeepLe), is proposed for both regression and classification tasks. iDeepLe is written in Python with the help of various API libraries such as Keras, TensorFlow, and Scikit-Learn. The core idea of the pipeline is sequential modeling, stacking numerous layers of neurons to build the deep architecture. Each layer in the sequential deep model can perform independently as a module with minimal restrictions and does not limit the performance of the other layers. iDeepLe can employ grid search, random search, and Bayesian optimization to tune the most significant predictor input variables and hyper-parameters in the deep model via adaptive learning rate optimization algorithms, balancing accuracy and complexity while simultaneously solving the unknown parameters of the regression or classification model. The parallel pipeline of iDeepLe can also handle big data problems using Apache Spark, Apache Arrow, High Performance Computing (HPC), and GPU-enabled machines. To show the importance of optimization in deep learning, an exhaustive study of the impact of hyper-parameters in a simple and a deep model, using optimization algorithms with adaptive learning rates, was carried out.
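The tuning idea can be sketched with scikit-learn's grid search over the depth and initial learning rate of a simple feed-forward network. The grid values and dataset are illustrative; iDeepLe itself builds sequential Keras models and additionally offers random and Bayesian search.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

grid = {
    "hidden_layer_sizes": [(64,), (64, 64), (64, 64, 64)],  # model depth
    "learning_rate_init": [1e-2, 1e-3],                     # adaptive-rate start
}
search = GridSearchCV(MLPClassifier(max_iter=500, random_state=0),
                      grid, cv=3, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```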
Face recognition has been widely studied by the computer vision community in recent years. Most of the work deals with close frontal images of the face, where facial structures can be easily distinguished; little work deals with recognizing faces at a distance, where faces are at very low resolution and barely distinguishable. In this work, we present a deep learning architecture that can be used to enhance low-resolution facial images captured at a distance. The proposed framework uses Deep Convolutional Generative Adversarial Networks (DCGAN) and works well even when only a small number of images is available for learning. The enhanced images are then sent to a face recognition algorithm for classification. The proposed framework outperforms classical enhancement techniques and leads to an increase in face recognition performance.
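For concreteness, a DCGAN-style generator that upsamples a latent vector to a 64x64 image via transposed convolutions is sketched below; an analogous stack conditioned on a low-resolution input would perform the enhancement. All sizes are illustrative assumptions.

```python
from tensorflow.keras import layers, models

generator = models.Sequential([
    layers.Input(shape=(100,)),                                # latent vector
    layers.Dense(8 * 8 * 128),
    layers.Reshape((8, 8, 128)),
    layers.Conv2DTranspose(64, 4, strides=2, padding="same"),  # -> 16x16
    layers.BatchNormalization(),
    layers.ReLU(),
    layers.Conv2DTranspose(32, 4, strides=2, padding="same"),  # -> 32x32
    layers.BatchNormalization(),
    layers.ReLU(),
    layers.Conv2DTranspose(3, 4, strides=2, padding="same",
                           activation="tanh"),                 # -> 64x64x3 image
])
generator.summary()
```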
Blockchain has proven to be a disruptive technology with a tremendous impact on cryptocurrency management, and its potential extends to other fields. The proliferation of new, small, cheap SoC processor boards will systematically flood our homes, businesses, buildings, and cities with Internet-capable devices. M2AP is an architecture that proposes a unified, micromanaged ecosystem for these devices: fully interconnected like a network mesh, but better prepared to control the data flow thanks to its device classification based on purpose and data usage. In this paper, we propose an M2AP architecture that uses blockchain for identification, security, and trust delivery management, applied to a home automation system. A Trust currency is introduced, a modified blockchain structure is utilized, and a proof of Trust is presented.
In this work, we present a low-latency, partition-tolerant distributed ledger using a novel directed acyclic graph (DAG) structure. By forcing information to be published at specific deterministic locations, we avoid complex consensus algorithms and eliminate confirmation periods. We implement our DAG over a fully decentralized overlay network, leveraging BitTorrent concepts for peer-to-peer (P2P) rendezvous and block propagation. We compute overall expected latency using Erdős-Rényi random graph models and present initial proof-of-concept software. The resulting work will allow anyone to create fully decentralized services (records, databases, etc.) resistant to attack, manipulation, and censorship.
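The latency estimate can be sketched by sampling an Erdős-Rényi overlay and taking the expected hop count between peers as a proxy for propagation delay; the peer count, edge probability, and per-hop delay below are illustrative assumptions.

```python
import networkx as nx

n, p = 1000, 0.01            # peers and connection probability (assumed)
PER_HOP_MS = 50              # assumed per-hop propagation delay

G = nx.erdos_renyi_graph(n, p, seed=42)
avg_hops = nx.average_shortest_path_length(G)   # expected hops between peers
print(f"expected propagation latency: {avg_hops * PER_HOP_MS:.0f} ms")
```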
Information technology has transformed our world, acting as an innovation enabler in our daily lives and in how we interact with the world and the people around us. One of the key drivers of this innovation is the exponential growth of computational capability over the last seventy years, starting from ENIAC and continuing with Moore's-law growth. Along with the exponential growth of computer hardware, algorithms and methods have also progressed to solve complex science and technology applications that can take advantage of each evolving generation of hardware. One can easily argue that mathematical algorithms from interdisciplinary sciences played a significant role as key enablers of the information sciences and the information technology revolution. In this talk, we review key mathematical algorithmic research that played a central role over the last 70 years of evolving computing hardware: ENIAC, digital computing, vector computing, massively parallel computing, cloud computing, multi-core computing, and mobile computing. We then discuss algorithmic research challenges for future computing hardware, which is envisioned to be heterogeneous or hybrid, consisting of vector processors, multi-core processors, neuromorphic processors, mobile tactical cloudlets, and quantum processors, with software-based intelligent networking as the backbone.
Computational Intelligence for Mission Assurance and Resilience
The impact of a truly autonomous cyber capability could be profound. The research presented here explores the benefits and drawbacks of autonomous cyber capabilities. It presents a method for, and emphasizes the importance of, testing the robustness of the machine learning algorithms that form the basis for autonomous decision making; robustness assessment will help mitigate the risks of nefarious actors compromising the learning systems. Considerations are also presented concerning the right level of human interaction and the importance of the human-machine team. Finally, we briefly discuss how the latest uses of blockchain technologies could improve the security and bound the behavior of autonomous cyber systems.
In the last several years, the volume and diversity of cyber attacks on U.S. commercial and government networks have increased dramatically, including malware, web attacks (e.g., drive-by downloads), zero-day exploits, and man-in-the-middle attacks (e.g., session hijacking). While many tools are available to attackers, cyber criminals increasingly rely on straightforward intrusion approaches (e.g., spear-phishing), employ vast distributed resources (botnets), and hide attack vectors via stepping-stone attacks. Detecting such activities and infrastructure represents the most difficult challenge for cyber-security professionals, because these threats are often locally invisible at isolated subnetworks. Cyber threat detection tools employed in the field today fail to deal with the volume, speed, and diversity of cyber attacks. Intrusion Detection Systems (IDS) are ineffective against novel threats, while anomaly-based methods generate large numbers of false alarms and are difficult to interpret. Supervised algorithms require curated, labeled datasets to train their models, which do not exist for novel attacks. Yet the biggest challenge of these systems is the requirement that all of the data be available in a single global repository: the cost of maintaining a global repository and the associated computation infrastructure becomes unsustainable as the volume of cyber data collection increases. Because threat detection solutions are deployed predominantly to analyze local traffic collected within and on the border of a single organization, these tools are unable to detect attacks that are locally invisible, such as attacks cross-cutting organizational boundaries. In this paper, we describe a new computational framework that will enable distributed enterprises to (a) perform local inference computations and (b) collaborate using global messages and hybrid strategies to detect a wide range of global threats that are not locally visible. First, we present a matrix-based algebra that generalizes a wide range of machine learning algorithms to maximize the breadth of attack phenomena to be detected. We then derive a semi-supervised attack detection model that uses hybrid collaboration, with adaptive local and global computations at distributed repositories, to detect global events when it is not possible to move all relevant data into a centralized location. Finally, we propose a feedback model to create an active human-in-the-loop system that integrates cyber analysts into the malicious behavior detection and pattern learning process by generating requests for annotation and result examination using a small number of representative instances of anomaly and threat detection outcomes.
Embry-Riddle Aeronautical University is working on an ongoing project, Resilient Autonomous Systems (RAS), supported by the Air Force Research Lab (AFRL). The objective of this project is to develop autonomous vehicle command and control (C2) technologies that demonstrate increased resilience in Anti-Access Area Denial (A2/AD) environments. Current automated solutions, which offer little autonomous re-planning capability, can be inflexible in handling dynamic scenarios in these environments. Here, increased resiliency is defined as the ability of the system to operate at or above an acceptable level of performance even in unfavorable environments, such as when encountering intelligent adversaries using Electronic Warfare (EW) and Integrated Air Defenses (IAD). The ERAU team has outlined a number of scenarios that set two teams, red and blue, against each other in a shared simulated environment. In general, the objectives for each team are as follows: the blue team assets must navigate through hostile enemy environments, collecting intelligence and reporting back to base while minimizing losses; the red team assets must minimize the loss of intelligence to the blue team while maximizing blue team expenditure in fuel and assets. Scenarios range from an area of 25 km² with 10 agents on each team to 4000 km² with more than 300 agents on each team.
Time-series signals are central to understanding and identifying the state of a dynamical system. They are ubiquitous in many areas related to geosciences, climate, and structural health monitoring. As a result, the theory and techniques for analyzing and modeling time-series have vast applications in many different scientific disciplines. One of the key challenges that time-series data analysts face is information/data overload. Furthermore, the sheer volume of time-series data generated at the sensor node makes it difficult to transport the data to centralized databases. These aspects pose an obstacle for data analysts in detecting changes in the system response as early as possible. Instead, a workflow for efficient and automatic reduction of collected data at sensor nodes can enable timely analyses and decrease event detection latency; such a workflow can be useful for many real-time monitoring and sensing applications. An attractive way to construct a computationally efficient workflow for automated analysis of time-series data is through machine learning. In this paper, we present a machine learning framework to construct models that efficiently reduce the time-series data by means of feature extraction and feature selection. In the first step of the framework, we apply a feature extraction and feature filtering algorithm called Feature Extraction based on Scalable Hypothesis tests (FRESH) to a given time-series dataset to extract comprehensive time-series signal features and then filter the resulting features. In the second step, we quantify the significance of each filtered feature for predicting a set of labels/targets. Third, we construct a machine learning classifier that takes in the important filtered features to classify the time-series signals. The proposed framework is tested and validated against ultrasonic sensing datasets obtained from multiphase flow loop experiments.
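The first two steps map naturally onto the tsfresh library, which implements the FRESH algorithm. A minimal sketch on synthetic long-format sensor data follows; the real inputs would be the ultrasonic recordings.

```python
import numpy as np
import pandas as pd
from tsfresh import extract_features, select_features
from tsfresh.utilities.dataframe_functions import impute

rng = np.random.default_rng(0)
n_series, n_points = 20, 50
y = pd.Series(rng.integers(0, 2, n_series))          # one label per recording

# Long-format data: one row per sample, grouped by recording id. A mean shift
# is injected for class 1 so some features are genuinely predictive.
values = rng.normal(size=n_series * n_points)
values += np.repeat(y.to_numpy(), n_points) * 1.0
df = pd.DataFrame({"id": np.repeat(range(n_series), n_points),
                   "time": np.tile(range(n_points), n_series),
                   "value": values})

X = extract_features(df, column_id="id", column_sort="time")  # extract features
impute(X)                              # clean NaN/inf from undefined features
X_sel = select_features(X, y)          # keep features significant for the labels
print(X_sel.shape)
```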
A technical solution is described for implementing a computer-executed system of associative memory matrices to replace the proximal layers of a convolutional neural network (CNN). An example method includes configuring one Associative Memory Matrix (AMM) for each configured layer in the CNN; this one-to-one conversion method motivates the product's name: the Joint Proximity Association Template (JPAT) for Neural Networks. The invention is a numerically stable software-based implementation that (1) reduces the long training times, (2) reduces the execution time, and (3) produces bidirectional intra-layer connections and, potentially, inter-layer connections as well. The method further includes, potentially, forming a single AMM from the multiple AMMs corresponding to the multiple proximal layers of the CNN, in anticipation of the well-known Universal Approximation Theorem.
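The AMM building block itself can be sketched as a classical linear associative memory: pattern pairs are stored as a sum of outer products and recalled by a single matrix multiplication. This illustrates the memory matrix only, not the JPAT layer-conversion procedure.

```python
import numpy as np

# Build a 16x16 Hadamard matrix so the stored input patterns are orthogonal.
H = np.array([[1.0]])
for _ in range(4):
    H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))

X = H[:4]                                 # 4 orthogonal bipolar input patterns
rng = np.random.default_rng(1)
Y = rng.choice([-1.0, 1.0], size=(4, 8))  # 4 associated output patterns

W = Y.T @ X                               # store pairs as summed outer products
recall = np.sign(W @ X[2])                # recall the association for pattern 2
print(np.array_equal(recall, Y[2]))       # True: exact recall for orthogonal inputs
```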
This paper investigates the effect of segmentation errors on off-angle iris recognition. We first segment the inner and outer boundaries of off-angle iris images by fitting best-fit ellipses to the boundaries, then validate the segmentation results using the Ground Truth Tool and fix any errors. Second, we add random errors to the iris segmentation parameters to examine how the Hamming distance distribution changes with different amounts of segmentation error. To add the errors, we group the parameters into three sets, ellipse center (x, y), minor and major axes (r1, r2), and orientation (θ), and perturb the ground truth segmentation parameters with positive and negative random noise. Our purpose is to show how the performance of iris recognition is affected by segmentation error and how important a very robust segmentation algorithm is for off-angle iris recognition. We test the effect of segmentation errors using our off-angle iris dataset, which contains images from -50° to +50° in gaze angle. Based on our experimental results, we observed that error in the ellipse center degrades recognition performance more than errors in the minor and major axes or the orientation. The main reason is that the normalization process tolerates small errors in the minor and major axes, and bit shifting in matching handles small variations in orientation. However, if the error in the ellipse orientation exceeds what the bit-shift method can tolerate, the performance of off-angle iris recognition drops dramatically.
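The bit-shift tolerance mentioned above can be sketched directly: the fractional Hamming distance between two iris codes is taken as the minimum over small circular shifts of one code, which absorbs modest orientation error. The codes below are toy arrays, not real iris codes.

```python
import numpy as np

def shifted_hamming(code_a, code_b, max_shift=8):
    """Minimum fractional Hamming distance over circular shifts of code_b."""
    return min(
        np.mean(code_a != np.roll(code_b, s))
        for s in range(-max_shift, max_shift + 1)
    )

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2048)       # toy binary iris code
b = np.roll(a, 5)                  # same code, rotated by a small angle
print(shifted_hamming(a, b))       # 0.0: the shift search recovers the match
```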