SaTC: CORE: Small: Auto-active Hyperproperty Verification for Security

Project: Research project

Project Details

Description

In principle, the highest assurance of trustworthiness for software systems is achieved by mathematical proofs of correctness with respect to precise specifications of requirements. Specifications describe the structure and intended behavior of software based on mathematical models of hardware, software, and the operating environment. The environment may include active adversaries trying to steal or tamper with sensitive information. Creating specifications and proofs is difficult and seldom done in current practice, but some companies have begun to adopt so-called auto-active verification tools. An auto-active verification tool enables the software engineer to produce specifications and proofs by annotating code with special comments called assertions. Assertions are checked automatically, and the tool is integrated with commonplace development tools and workflows. This approach works well for requirements that can be expressed in terms of individual executions of a program. However, confidentiality and privacy requirements must be specified as relationships between multiple executions. Relationships are also needed to verify equivalence between different versions of a program. Such relational requirements are called hyperproperties. In this project, researchers are developing theory and algorithms to extend the reach of auto-active verification to include hyperproperties. The research has the potential to transform computing practice by supporting accountability of system designers and builders, and trustworthiness of software supply chains, through evidence that includes mathematically precise specifications and machine-checked proofs of security and privacy. Usable equivalence verification can lessen the risk that code improvements and bug fixes introduce new vulnerabilities.

The project plan is organized in three thrusts. The first thrust advances the science of security by developing theory in the form of deductive logics, together with novel algebras that support automated reasoning by aligning intermediate steps in relationships between executions. Deductive rules are connected with annotation-oriented verification conditions and with code-independent confidentiality and integrity properties specified with respect to a model of adversary capabilities. The second thrust develops algorithms for automated checking of annotations and for searching for alignments that facilitate annotation; existing prototype tools are being extended with these algorithms. Trustworthiness of the tools is assured by the theory developed in the first thrust, which is being machine-checked using an interactive proof assistant. In the third thrust, experiments and case studies using the prototypes evaluate the usability and effectiveness of verification for requirements including secure information flow, program refinement, and program equivalence. Research products, including training materials and benchmarks, are being made freely available.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
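To make the contrast concrete, here is a minimal sketch, not drawn from the project or its tools: an ordinary assertion constrains a single execution, while a noninterference-style hyperproperty relates pairs of executions that agree on public inputs. The function names (respond, respond_leaky, noninterference_holds), the toy program, and the brute-force check are hypothetical, and the check is a testing-style approximation of the property rather than the deductive, auto-active verification the project develops.

```python
# Hypothetical illustration (names and programs are invented for this sketch).
# An ordinary assertion constrains ONE execution; a hyperproperty such as
# noninterference constrains PAIRS of executions, so it cannot be phrased as
# an assertion about a single run.

def respond(public_request: int, secret_key: int) -> int:
    # Well-behaved version: the observable output ignores the secret.
    return (public_request * 2) % 100

def respond_leaky(public_request: int, secret_key: int) -> int:
    # Leaky version: the secret influences what an adversary can observe.
    return (public_request * 2 + secret_key) % 100

# Single-trace assertion: a property of one execution of respond.
assert respond(7, secret_key=42) == 14

def noninterference_holds(program, public_inputs, secrets) -> bool:
    """Brute-force, testing-style check of a two-run hyperproperty:
    any two runs that agree on the public input must agree on the
    observable output, no matter how the secrets differ."""
    for pub in public_inputs:
        observable = {program(pub, s) for s in secrets}
        if len(observable) > 1:  # some pair of runs disagrees: a leak
            return False
    return True

if __name__ == "__main__":
    publics, secrets = range(10), range(50)
    print("respond       satisfies noninterference:",
          noninterference_holds(respond, publics, secrets))        # True
    print("respond_leaky satisfies noninterference:",
          noninterference_holds(respond_leaky, publics, secrets))  # False
```

In an auto-active setting, a relational annotation on respond would express the same two-run requirement as a specification to be proved once for all inputs, rather than enumerated as above.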
Status: Active
Effective start/end date: 1/10/24 to 30/09/27

Funding

  • National Science Foundation
