Secure Development

The emergence of artificial intelligence (AI) has significantly transformed the software engineering landscape. AI-enabled coding assistants such as ChatGPT and GitHub Copilot boost developers' efficiency and productivity. For instance, a developer who needs to add encryption to an application can ask one of these tools for the necessary boilerplate code rather than learning the intricacies of encryption algorithms and writing that code from scratch. By providing ready-to-use snippets, these tools make it quick to incorporate encryption into software projects.
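
For example, a minimal sketch of the kind of boilerplate such a tool might suggest for symmetric encryption is shown below. It assumes Python and the widely used cryptography package (Fernet) and is purely illustrative, not tied to any particular assistant.

    # Hypothetical example of assistant-suggested boilerplate for symmetric
    # encryption, using the "cryptography" package's Fernet recipe.
    from cryptography.fernet import Fernet

    # Generate a fresh symmetric key (in practice this would be stored securely,
    # e.g., in a key management service, not alongside the ciphertext).
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # Encrypt and decrypt a message; Fernet handles the IV, padding, and
    # integrity protection (HMAC) internally.
    ciphertext = cipher.encrypt(b"sensitive payload")
    plaintext = cipher.decrypt(ciphertext)
    assert plaintext == b"sensitive payload"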


Publications

Conferences

Abstract:  AI-powered coding assistant tools have revolutionized the software engineering ecosystem. However, prior work has demonstrated that these tools are vulnerable to poisoning attacks, in which an attacker intentionally injects maliciously crafted insecure code snippets into training datasets to manipulate the tools. The poisoned tools can then suggest insecure code to developers, resulting in vulnerabilities in their products that attackers can exploit. Little is known, however, about whether such poisoning attacks are practical in real-world settings or about how developers address them during software development. To understand the real-world impact of poisoning attacks on developers who rely on AI-powered coding assistants, we conducted two user studies: an online survey and an in-lab study. The online survey involved 238 participants, including software developers and computer science students. The survey results revealed widespread adoption of these tools among participants, primarily to enhance coding speed, eliminate repetition, and obtain boilerplate code. However, the survey also found that developers may misplace trust in these tools because they overlook the risk of poisoning attacks. The in-lab study was conducted with 30 professional developers, who were asked to complete three programming tasks with a representative type of AI-powered coding assistant tool running in Visual Studio Code. The in-lab study results showed that developers using a poisoned ChatGPT-like tool were more prone to including insecure code than those using an IntelliCode-like tool or no tool, demonstrating the strong influence of these tools on the security of generated code. Our study results highlight the need for education and improved coding practices to address new security issues introduced by AI-powered coding assistant tools.
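
To make the threat concrete, the sketch below contrasts the kind of insecure suggestion a poisoned assistant could produce (a hard-coded key with AES in ECB mode) with a conventional safer pattern. It is an illustrative example written for this page using Python's cryptography package, not code drawn from the study's data.

    # Illustrative only: contrasting an insecure suggestion a poisoned assistant
    # might make with a conventional safer pattern.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # INSECURE (the kind of snippet a poisoned tool might suggest): a key
    # hard-coded in source and ECB mode, which leaks plaintext patterns and
    # provides no integrity protection.
    hardcoded_key = b"0123456789abcdef"  # 16-byte key embedded in the code
    ecb = Cipher(algorithms.AES(hardcoded_key), modes.ECB()).encryptor()
    leaky_ciphertext = ecb.update(b"exactly 16 bytes") + ecb.finalize()

    # SAFER: random key, authenticated encryption (AES-GCM), fresh nonce per message.
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, b"sensitive payload", None)
    plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)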
Abstract:  We conducted a user study that compares three secure email tools that share a common user interface and differ only by key management scheme: passwords, public key directory (PKD), and identity-based encryption (IBE). Our work is the first comparative (i.e., A/B) usability evaluation of three different key management schemes and utilizes a standard quantitative metric for cross-system comparisons. We also share qualitative feedback from participants that provides valuable insights into user attitudes regarding each key management approach and secure email generally. The study serves as a model for future secure email research with A/B studies, standard metrics, and the two-person study methodology.
Abstract:  Cloud-hosted databases have many compelling benefits, including high availability, flexible resource allocation, and resiliency to attack, but they require that cloud tenants cede control of their data to the cloud provider. In this paper, we describe Proactively-secure Accumulo with Cryptographic Enforcement (PACE), a client-side library that cryptographically protects a tenant's data, returning control of that data to the tenant. PACE is a drop-in replacement for Accumulo's APIs and works with Accumulo's row-level security model. We evaluate the performance of PACE, discussing the impact of encryption and signatures on operation throughput.
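
The sketch below illustrates the general client-side protection model, in which values are encrypted under a tenant-held key before they ever reach the cloud-hosted store. It is a conceptual Python example, not PACE's actual Java API, and the in-memory dictionary merely stands in for an Accumulo table.

    # Conceptual sketch of client-side protection: the cloud provider only
    # ever sees ciphertext; the tenant keeps the key. Not PACE's API.
    from cryptography.fernet import Fernet

    class EncryptedKVClient:
        """Wraps an untrusted key-value store; values are encrypted client-side."""

        def __init__(self, backing_store: dict, key: bytes):
            self._store = backing_store   # stand-in for the cloud-hosted table
            self._cipher = Fernet(key)    # tenant-held key, never sent to the cloud

        def put(self, row: str, value: bytes) -> None:
            self._store[row] = self._cipher.encrypt(value)

        def get(self, row: str) -> bytes:
            return self._cipher.decrypt(self._store[row])

    cloud_table = {}
    client = EncryptedKVClient(cloud_table, Fernet.generate_key())
    client.put("row1", b"tenant secret")
    assert client.get("row1") == b"tenant secret"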
Abstract:  The current state of certificate-based authentication is messy, with broken authentication in applications and proxies, along with serious flaws in the CA system. To solve these problems, we design TrustBase, an architecture that provides certificate-based authentication as an operating system service, with system administrator control over authentication policy. TrustBase transparently enforces best practices for certificate validation on all applications, while also providing a variety of authentication services to strengthen the CA system. We describe a research prototype of TrustBase for Linux, which uses a loadable kernel module to intercept traffic in the socket layer, then consults a user-space policy engine to evaluate certificate validity using a variety of plugins. We evaluate the security of TrustBase, including a threat analysis, application coverage, and hardening of the Linux prototype. We also describe prototypes of TrustBase for Android and Windows, illustrating the generality of our approach. We show that TrustBase has negligible overhead and universal compatibility with applications. We demonstrate its utility by describing eight authentication services that extend CA hardening to all applications.
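
For context on the validation mistakes TrustBase is designed to override, the sketch below contrasts an application that disables certificate verification with one that relies on platform defaults (chain validation plus hostname checking). It is a generic Python illustration, not TrustBase code; TrustBase's point is that such checks are enforced below the application, so a mistake like the first one cannot silently disable them.

    # Generic illustration of certificate-validation practice in applications;
    # not TrustBase code.
    import socket
    import ssl

    hostname = "example.com"  # placeholder host for the illustration

    # BROKEN: verification disabled, so any certificate is accepted for any host.
    insecure_ctx = ssl._create_unverified_context()  # intentionally shown as the anti-pattern

    # BEST PRACTICE: the default context verifies the chain against trusted
    # roots and checks that the certificate matches the hostname.
    secure_ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443)) as sock:
        with secure_ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            print(tls.getpeercert()["subject"])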
Abstract:  Developing secure software is inherently difficult, and is further hampered by a rush to market, the lack of cybersecurity-trained architects and developers, and the difficulty of identifying flaws and deploying mitigations. To address these problems, we advocate for an alternative paradigm: layering security onto applications from global control points, such as the browser, operating system, or network. This approach adds security to existing applications, relieving developers of this burden. The benefits of this paradigm are three-fold: (1) increased correctness in the implementation of security features, (2) coverage for all software, even non-maintained legacy software, and (3) more rapid and consistent deployment of threat mitigations and new security features. To demonstrate these benefits, we describe three concrete instantiations of this paradigm: MessageGuard, a system that layers end-to-end encryption in the browser; TrustBase, a system that layers authentication in the operating system; and software-defined perimeter, which layers access control at network middleboxes.
Abstract:  Potentially dangerous cryptography errors are well-documented in many applications. Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable; however, none of these libraries have been empirically evaluated for their ability to promote more secure development. This paper is the first to examine both how and why the design and resulting usability of different cryptographic libraries affects the security of code written with them, with the goal of understanding how to build effective future libraries. We conducted a controlled experiment in which 256 Python developers recruited from GitHub attempted common tasks involving symmetric and asymmetric cryptography using one of five different APIs. We examined their resulting code for functional correctness and security, and compared their results to their self-reported sentiment about their assigned library. Our results suggest that while APIs designed for simplicity can provide security benefits (reducing the decision space, as expected, prevents choice of insecure parameters), simplicity is not enough. Poor documentation, missing code examples, and a lack of auxiliary features such as secure key storage caused even participants assigned to simplified libraries to struggle with both basic functional correctness and security. Surprisingly, the availability of comprehensive documentation and easy-to-use code examples seems to compensate for more complicated APIs in terms of functionally correct results and participant reactions; however, this did not extend to security results. We find it particularly concerning that for about 20% of functionally correct tasks, across libraries, participants believed their code was secure when it was not. Our results suggest that while new cryptographic libraries that want to promote effective security should offer a simple, convenient interface, this is not enough: they should also, and perhaps more importantly, ensure support for a broad range of common tasks and provide accessible documentation with secure, easy-to-use code examples.
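
As one concrete illustration of the simplified-interface idea the study examines, the sketch below uses PyNaCl's SecretBox, a high-level API that chooses the algorithm and handles the nonce for the developer, leaving few insecure parameters to pick. It is a generic example, not code from the study.

    # Illustrative example of a simplified, high-level cryptographic API
    # (PyNaCl's SecretBox): the library selects the algorithm and manages
    # the nonce, so the developer cannot easily choose insecure parameters.
    import nacl.secret
    import nacl.utils

    key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
    box = nacl.secret.SecretBox(key)

    # encrypt() generates a random nonce and returns nonce + ciphertext + MAC.
    ciphertext = box.encrypt(b"attack at dawn")
    assert box.decrypt(ciphertext) == b"attack at dawn"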
Abstract:  Vulnerabilities in Android code—including but not limited to insecure data storage, unprotected inter-component communication, broken TLS implementations, and violations of least privilege—have enabled real-world privacy leaks and motivated research cataloguing their prevalence and impact. Researchers have speculated that appification promotes security problems, as it increasingly allows inexperienced laymen to develop complex and sensitive apps. Anecdotally, Internet resources such as Stack Overflow are blamed for promoting insecure solutions that are naively copy-pasted by inexperienced developers. In this paper, we systematically analyzed for the first time how the use of information resources impacts code security. We first surveyed 295 app developers who have published in the Google Play market concerning how they use resources to solve security-related problems. Based on the survey results, we conducted a lab study with 54 Android developers (students and professionals), in which participants wrote security- and privacy-relevant code under time constraints. The participants were assigned to one of four conditions: free choice of resources, Stack Overflow only, official Android documentation only, or books only. Those participants who were allowed to use only Stack Overflow produced significantly less secure code than those using the official Android documentation or books, while participants using the official Android documentation produced significantly less functional code than those using Stack Overflow. To assess the quality of Stack Overflow as a resource, we surveyed the 139 threads our participants accessed during the study, finding that only 25% of them were helpful in solving the assigned tasks and only 17% of them contained secure code snippets. To obtain ground truth concerning the prevalence of the secure and insecure code our participants wrote in the lab study, we statically analyzed a random sample of 200,000 apps from Google Play, finding that 93.6% of the apps used at least one of the API calls our participants used during our study. We also found that many of the security errors made by our participants also appear in the wild, possibly also originating in the use of Stack Overflow to solve programming problems. Taken together, our results confirm that API documentation is secure but hard to use, while informal documentation such as Stack Overflow is more accessible but often leads to insecurity. Given time constraints and economic pressures, we can expect that Android developers will continue to choose those resources that are easiest to use; therefore, our results firmly establish the need for secure-but-usable documentation.