In our lab, we also pursue research that falls outside our otherwise broad portfolio. Here you will find information about publications from these projects.
Abstract: In this work, we explore the ability to estimate vehicle fuel consumption using imagery from overhead fisheye lens cameras deployed as traffic sensors. We utilize this information to simulate vision-based control of a traffic intersection, with a goal of improving fuel economy with minimal impact to mobility. We introduce the ORNL Overhead Vehicle Data set (OOVD), consisting of paired, labeled vehicle images from a ground-based camera and an overhead fisheye lens traffic camera. The data set includes segmentation masks based on Gaussian mixture models for vehicle detection. We show the data set's utility through three applications: estimation of fuel consumption based on segmentation bounding boxes, vehicle discrimination for vehicles with large bounding boxes, and fine-grained classification on a limited number of vehicle makes and models using a pre-trained set of convolutional neural network models. We compare these results with estimates based on a large open-source data set of web-scraped imagery. Finally, we show the utility of the approach using reinforcement learning in a traffic simulator built on the open-source Simulation of Urban Mobility (SUMO) package. Our results demonstrate the feasibility of controlling traffic lights for better fuel efficiency based solely on visual vehicle estimates from commercial fisheye lens cameras.
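The core idea of mapping overhead bounding boxes to fuel estimates can be sketched roughly as follows. This is an illustrative toy, not the paper's method: the area thresholds, size classes, and idle fuel rates are invented placeholders.

```python
# Hypothetical sketch: bucket vehicles by overhead bounding-box area and
# map each size class to a nominal idle fuel rate. All thresholds and
# rates below are illustrative placeholders, not values from the paper.

def size_class(bbox):
    """bbox = (x, y, width, height) in pixels from the overhead detector."""
    _, _, w, h = bbox
    area = w * h
    if area < 5000:
        return "passenger"
    elif area < 12000:
        return "suv_pickup"
    return "truck_bus"

# Illustrative idle fuel-consumption rates (mL/s) per size class.
IDLE_FUEL_RATE = {"passenger": 0.17, "suv_pickup": 0.25, "truck_bus": 0.9}

def queue_fuel_rate(bboxes):
    """Total estimated idle fuel rate for a queue of detected vehicles."""
    return sum(IDLE_FUEL_RATE[size_class(b)] for b in bboxes)
```

A controller could use `queue_fuel_rate` per approach lane as part of a reward signal when training a signal-timing policy in simulation.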
Abstract: When network infrastructure is down after disasters such as Hurricane Maria, in the face of extreme censorship, and in remote areas without infrastructure, novel solutions for large-scale delay-tolerant communications are needed. Nation Scale Mobile Ad Hoc Network, or NSHoc, enables smartphone users to request and receive content via opportunistic encounters at nation scale with no prior knowledge of network members and in sparse topologies where individual nodes may remain isolated for minutes or even hours at a time. We call such sparse topologies normally isolated. It does so by leveraging mobile ad hoc networks that rely on opportunistic encounters between users to distribute content. We use a custom simulator to test the system over two nation-scale topologies, Puerto Rico and Syria. With 10K users, NSHoc can deliver over 95% of requested content to over 97% of users in 143 locations spread throughout Puerto Rico in less than 5 hours on average, with a total throughput of 0.42 pieces of content per second. Significantly, these results are not simply the consequence of popular content being cached. We demonstrate that requests for unpopular content are also satisfied due to the benefits of ubiquitous caching. In addition, we show that NSHoc remains performant across a variety of topologies, mobility models, and content distributions. No known prior work considers such large-scale, sparse topologies. This work shows that MANETs are an attractive alternative for distributing content at nation scale in the face of infrastructure loss, even when users are normally isolated.
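The opportunistic-encounter mechanism described above can be illustrated with a minimal simulation, assuming a simple epidemic-style spread where any node that holds a piece of content caches it and shares it on contact. This is a toy stand-in for the paper's custom simulator; the parameters and encounter model are invented for illustration.

```python
import random

# Toy sketch of opportunistic content delivery: nodes meet pairwise at
# random, and any node holding the content shares it with its encounter
# partner. A requester is eventually served even if it never directly
# meets the original source, because intermediaries cache the content.

def simulate(num_nodes=50, source=0, requester=1, seed=42, max_steps=100000):
    rng = random.Random(seed)
    has_content = {source}
    for step in range(1, max_steps + 1):
        a, b = rng.sample(range(num_nodes), 2)  # one pairwise encounter
        if a in has_content or b in has_content:
            has_content.update((a, b))          # both sides now cache it
        if requester in has_content:
            return step                          # encounters until delivery
    return None
```

Sparser topologies correspond to fewer encounters per unit time, stretching the delivery delay without changing the eventual-delivery guarantee, which mirrors the "normally isolated" regime the abstract describes.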
Abstract: Outlier detection has been shown to be a promising machine learning technique for a diverse array of fields and problem areas. However, traditional, supervised outlier detection is not well suited for problems such as network intrusion detection, where properly labelled data is scarce. This has created a focus on extending these approaches to be unsupervised, removing the need for explicit labels, but at the cost of poorer performance compared to their supervised counterparts. Recent work has explored ways of making up for this, such as creating ensembles of diverse models, or even diverse learning algorithms, to jointly classify data. While using unsupervised, heterogeneous ensembles of learning algorithms has been proposed as a viable next step for research, the implications of how these ensembles are built and used have not been explored.
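The heterogeneous-ensemble idea can be sketched as combining normalized scores from detectors of different families. The two detectors below (a per-feature z-score and a k-nearest-neighbor distance) are illustrative stand-ins, not the algorithms studied in the paper.

```python
import numpy as np

# Hedged sketch of an unsupervised, heterogeneous outlier ensemble:
# two detectors from different algorithm families score each point,
# scores are min-max normalized onto [0, 1], and the ensemble averages
# them. No labels are used anywhere.

def zscore_scores(X):
    """Largest absolute per-feature z-score for each point."""
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-9
    return np.abs((X - mu) / sigma).max(axis=1)

def knn_scores(X, k=3):
    """Distance to each point's k-th nearest neighbor (self included at 0)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    return np.sort(d, axis=1)[:, k]

def normalize(s):
    return (s - s.min()) / (s.max() - s.min() + 1e-9)

def ensemble_scores(X):
    """Average of normalized scores; higher means more outlying."""
    return (normalize(zscore_scores(X)) + normalize(knn_scores(X))) / 2
```

Normalizing before averaging matters because raw scores from different detector families live on incomparable scales; this is one of the construction choices whose implications the abstract says remain unexplored.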
Abstract: The World Wide Web has become the most common platform for building applications and delivering content. Yet despite years of research, the web continues to face severe security challenges related to data integrity and confidentiality. Rather than continuing the exploit-and-patch cycle, we propose addressing these challenges at an architectural level, by supplementing the web's existing connection-based and server-based security models with a new approach: content-based security. With this approach, content is directly signed and encrypted at rest, enabling it to be delivered via any path and then validated by the browser. We explore how this new architectural approach can be applied to the web and analyze its security benefits. We then discuss a broad research agenda to realize this vision and the challenges that must be overcome.
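The content-based model described above, where content carries its own integrity protection so it can be validated regardless of delivery path, can be sketched minimally. Note the hedge: a real deployment would use asymmetric publisher signatures verifiable by any browser; the shared-key HMAC below is only a compact stand-in for demonstration.

```python
import hashlib
import hmac
import json

# Toy sketch of content-based security: the publisher attaches an
# authentication tag to content at rest, and the consumer validates the
# tag no matter which server, cache, or peer delivered the bytes.
# An HMAC with a shared key stands in for a real digital signature.

KEY = b"demo-shared-key"  # illustrative only; real systems use key pairs

def package(content: bytes) -> bytes:
    """Bundle content with its authentication tag for path-independent delivery."""
    tag = hmac.new(KEY, content, hashlib.sha256).hexdigest()
    return json.dumps({"content": content.decode(), "sig": tag}).encode()

def validate(packaged: bytes) -> bool:
    """Verify the tag; returns False if the content was modified in transit."""
    obj = json.loads(packaged)
    expect = hmac.new(KEY, obj["content"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, obj["sig"])
```

Because validation depends only on the packaged bytes and the verification key, the content can traverse untrusted mirrors or caches, which is the architectural shift away from purely connection-based security that the abstract proposes.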