University of Minnesota
Software Engineering Center

Gregory Gay
Student/Research Assistant
Office Location: 
6-248 Keller Hall
Education: 
Ph.D. Computer Science, University of Minnesota, 2015.
Advisor: Dr. Mats Heimdahl.
Thesis title: Steering Model-Based Oracles to Admit Real Program Behaviors.

M.S. Computer Science, West Virginia University, 2010.
Advisor: Dr. Tim Menzies.
Thesis title: Robust Optimization of Non-Linear Requirements Models.

B.S. Computer Science, West Virginia University, 2008.
Biography: 

Greg is an assistant professor of Computer Science & Engineering at the University of South Carolina. He was previously a Ph.D. student and research assistant at the University of Minnesota, supported by an NSF Graduate Research Fellowship and working with the Critical Systems research group. He received his B.S. and M.S. in Computer Science from West Virginia University.

Greg has also interned at NASA's Ames Research Center and Independent Verification & Validation Center, and spent time as a visiting academic at the Chinese Academy of Sciences in Beijing.

Research: 

Greg's research is primarily in the areas of search-based software engineering and automated software testing and analysis, with an emphasis on aspects of the test oracle problem. His current research focus is on construction of effective test oracles for real-time and safety critical systems, including methods of selecting oracle data and making comparisons.

He takes a data-centric approach to research problems, working at the intersection of search, optimization, data mining, and artificial intelligence. He strives to harness the information content of software development artifacts to improve the efficiency and quality of the testing process, and to automate tasks in order to lessen the burden on human testers.

His past research has largely focused on the application of search, optimization, and information retrieval techniques to various software engineering tasks, including model optimization, requirements engineering, effort estimation, defect detection, and the traceability between source code and defect reports.

Recent Publications

Measuring the Heterogeneity of Cross-company Datasets

As a standard practice, general effort estimation models are calibrated from large cross-company datasets. However, many of the records within such datasets are taken from companies that have calibrated the model to match their own local practices. Locally calibrated models are a double-edged sword: they often improve estimation accuracy for that particular organization, but they also encourage the growth of local biases. Such biases remain present when projects from that firm are used in a new cross-company dataset. Over time, such biases compound, and the …

Finding Robust Solutions in Requirements Models

Solutions to non-linear requirements engineering problems may be "brittle"; i.e., small changes may dramatically alter solution effectiveness. Hence, it is not enough to just generate solutions to requirements problems; we must also assess solution robustness. The KEYS2 algorithm can generate decision ordering diagrams; once generated, these diagrams can assess solution robustness in linear time. In experiments with real-world requirements engineering models, we show that KEYS2 can generate decision ordering diagrams in O(N^2) time.
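To make the decision-ordering idea concrete, here is a minimal Python sketch, not the KEYS2 implementation itself: KEYS2 ranks decisions with a Bayesian support heuristic over simulated model runs, whereas this stand-in uses a simple frequency count over random samples. The names `toy_model`, `rank_decisions`, and `decision_ordering` are illustrative assumptions, and "fixing" a decision to True is a simplification of fixing it to its ranked value.

```python
import random

def rank_decisions(model, n_decisions, samples=1000, top_frac=0.1):
    """Illustrative stand-in for KEYS2's ranking step: rank boolean decisions
    by how often they are True in the best-scoring random samples."""
    pool = []
    for _ in range(samples):
        decisions = [random.random() < 0.5 for _ in range(n_decisions)]
        pool.append((model(decisions), decisions))
    pool.sort(key=lambda pair: pair[0], reverse=True)           # best scores first
    best = pool[:max(1, int(top_frac * len(pool)))]
    counts = [sum(d[i] for _, d in best) for i in range(n_decisions)]
    return sorted(range(n_decisions), key=lambda i: counts[i], reverse=True)

def decision_ordering(model, n_decisions, ordering, trials=200):
    """Score the model as the top-k ranked decisions are progressively fixed,
    with the remaining decisions left random. Reading this curve is how a
    decision ordering diagram is used: a flat tail suggests later decisions
    matter little, i.e. the solution region is robust to changes in them."""
    curve = []
    for k in range(n_decisions + 1):
        total = 0.0
        for _ in range(trials):
            decisions = [random.random() < 0.5 for _ in range(n_decisions)]
            for i in ordering[:k]:
                decisions[i] = True                             # fix top-k decisions
            total += model(decisions)
        curve.append(total / trials)
    return curve

if __name__ == "__main__":
    # Toy stand-in for a requirements model: reward some decisions, penalize one interaction.
    def toy_model(d):
        return sum(d[:5]) - 2.0 * (d[5] and d[6]) + 0.5 * d[7]
    order = rank_decisions(toy_model, 10)
    print(decision_ordering(toy_model, 10, order))
```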

On the Use of Relevance Feedback in IR-based Concept Location

Concept location is a critical activity during software evolution, as it produces the location in the source code where a change is to start in response to a modification request, such as a bug report or a new feature request. Lexical-based concept location techniques rely on matching the text embedded in the source code to queries formulated by the developers. The efficiency of such techniques depends strongly on the developer's ability to write good queries. We propose an approach to augment information retrieval (IR) based concept location via an explicit relevance feedback (RF) mechanism.
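As a rough illustration of explicit relevance feedback on top of an IR index, the sketch below applies the classic Rocchio reformulation to TF-IDF vectors of method text; it is not the specific configuration evaluated in the paper, and the corpus, query, and feedback labels are made up for the example.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: each "document" is the identifiers/comments of one method.
corpus = [
    "draw shape canvas render pixel color",
    "parse xml document node attribute",
    "save file disk write stream buffer",
    "render scene camera light shadow pixel",
]

vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(corpus).toarray()

def rocchio(query, relevant_ids, nonrelevant_ids, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio reformulation: move the query vector toward documents the
    developer marked relevant and away from those marked non-relevant."""
    q = vectorizer.transform([query]).toarray()[0]
    if relevant_ids:
        q = alpha * q + beta * doc_vecs[relevant_ids].mean(axis=0)
    if nonrelevant_ids:
        q = q - gamma * doc_vecs[nonrelevant_ids].mean(axis=0)
    return np.clip(q, 0.0, None)           # negative term weights are usually dropped

def rank(query_vec):
    scores = cosine_similarity([query_vec], doc_vecs)[0]
    return np.argsort(-scores)              # method indices, best match first

initial = vectorizer.transform(["pixel rendering bug"]).toarray()[0]
print("initial ranking:", rank(initial))
# Developer feedback after inspecting results: method 3 is relevant, method 2 is not.
print("after feedback: ", rank(rocchio("pixel rendering bug", [3], [2])))
```

The point of the feedback loop is that the developer only judges a few retrieved methods as relevant or not, and the reformulated query then re-ranks the remaining code without the developer having to rewrite the query by hand.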
