Tag Archives: program analysis

Slides for the “Static Analysis and Verification of C Programs” Talk

Slides for the “Static Analysis and Verification of C Programs” talk are now available on SlideShare.

Static Analysis and Verification of C Programs

SEPTEMBER 17 @ 12:00 PM – 1:00 PM

SUBASH SHANKAR

Department of Computer Science, Hunter College, City University of New York

Recent years have seen the emergence of several static analysis techniques for reasoning about programs. This talk presents several major classes of these techniques, along with tools that implement them. Part of the presentation will be a demonstration of the tools.

Dr. Subash Shankar is an Associate Professor in the Computer Science department at Hunter College, CUNY. Prior to joining CUNY, he received a PhD from the University of Minnesota and was a postdoctoral fellow in the model checking group at Carnegie Mellon University. Dr. Shankar also has over 10 years of industrial experience, mostly in the areas of formal methods and tools for analyzing hardware and software systems.

DETAILS

Date:
September 17
Time:
12:00 pm – 1:00 pm

VENUE

N922A
300 Jay St., Room N922A, Brooklyn, NY 11201, United States


Phone:
(718) 260-5500
Website:
http://www.citytech.cuny.edu/

ORGANIZER

Computer Systems Technology Colloquium Series
Phone:
(718) 260-5170
Website:
https://openlab.citytech.cuny.edu/cstcolloquium


Slides for the “Test Dependencies and the Future of Build Acceleration” Talk

Slides for the “Test Dependencies and the Future of Build Acceleration” talk are now available on SlideShare.

Test Dependencies and the Future of Build Acceleration

JONATHAN BELL

Programming Systems Laboratory, Department of Computer Science, Columbia University

SEPTEMBER 10 @ 12:00 PM – 1:00 PM

With the proliferation of testing culture, many developers are facing new challenges. As projects are getting started, the focus may be on developing enough tests to maintain confidence that the code is correct. However, as developers write more and more tests, performance and repeatability become growing concerns for test suites. In our study of large open source software, we found that running tests took on average 41% of the total time needed to build each project – over 90% in those that took the longest to build. Unfortunately, typical techniques for accelerating test suites from the literature (like running only a subset of tests, or running them in parallel) can't be applied safely in practice, since tests may depend on each other. These dependencies are very hard to find and detect, posing a serious challenge to test and build acceleration. In this talk, I will present my recent research in automatically detecting and isolating these dependencies, enabling significant, safe, and sound build acceleration of up to 16x.
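As a rough, hypothetical illustration of the kind of dependency the abstract describes (a sketch of this post's own, not material from the talk), the JUnit 4 snippet below couples two tests through shared static state: the second test passes only when the first has already run, so selecting a subset of tests or running them in parallel can silently change the outcome.

```java
// Hypothetical JUnit 4 sketch of a hidden test dependency (all names invented).
import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.FixMethodOrder;
import org.junit.Test;
import org.junit.runners.MethodSorters;

@FixMethodOrder(MethodSorters.NAME_ASCENDING) // relies on alphabetical execution order
public class CacheTest {
    // Shared mutable state that silently couples the two tests.
    static final Map<String, String> cache = new HashMap<>();

    @Test
    public void testA_populatesCache() {
        cache.put("key", "value");
        assertEquals(1, cache.size());
    }

    @Test
    public void testB_readsCache() {
        // Passes only because testA ran first and left "key" in the cache;
        // running testB alone, or reordering/parallelizing the suite, makes it fail.
        assertEquals("value", cache.get("key"));
    }
}
```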

Jon is a fourth-year PhD candidate at Columbia University studying Software Engineering with Prof. Gail Kaiser. His research interests in Software Engineering mostly fall under the umbrella of Software Testing and Program Analysis. Jon's recent research in accelerating software testing has been recognized with an ACM SIGSOFT Distinguished Paper Award (ICSE '14), and has been the basis for an industrial collaboration with the Bay Area software build acceleration company Electric Cloud. Jon actively participates in the artifact evaluation program committees of ISSTA and OOPSLA, and has served several years as the Student Volunteer chair for OOPSLA.

