Minimum Energy Consumption for Rate Monotonic Algorithm in a Hard Real-Time Environment

300 Jay St., Room N922A, Brooklyn, NY, United States

We will discuss the problem of determining the minimum energy consumption for the rate monotonic algorithm in a hard real-time environment. The solution is obtained by the Lagrange multiplier method; because of its iterative nature, a computer algorithm is developed.
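The abstract gives no formulas, so the following is only a minimal sketch of the kind of optimization involved, under assumed models that are not taken from the talk: per-task normalized speeds f_i, a cubic dynamic power model (so average power scales as u_i f_i^2), and the Liu & Layland rate-monotonic utilization bound as the schedulability constraint. The Lagrange stationarity condition fixes each f_i in terms of the multiplier; with speed limits the multiplier has no closed form, so it is found iteratively here by bisection, echoing the iterative algorithm mentioned above.

```python
"""Illustrative sketch (not the speaker's algorithm): choose per-task CPU
speeds that minimize dynamic energy while keeping a rate-monotonic task
set schedulable under the Liu & Layland utilization bound.

Assumed model (all of it is an assumption for illustration):
  - task i has utilization u_i = C_i / T_i at full speed;
  - running task i at speed f_i in (0, 1] inflates its utilization to
    u_i / f_i and costs average power ~ k_i * u_i * f_i**2;
  - schedulability: sum_i u_i / f_i <= n * (2**(1/n) - 1).
"""


def min_energy_speeds(u, k, f_min=0.1, f_max=1.0, iters=100):
    n = len(u)
    bound = n * (2 ** (1.0 / n) - 1)          # Liu & Layland RM bound

    def speeds(lam):
        # Stationary point of the Lagrangian, clamped to the speed range.
        return [min(max((lam / (2.0 * ki)) ** (1.0 / 3.0), f_min), f_max)
                for ki in k]

    def utilization(f):
        return sum(ui / fi for ui, fi in zip(u, f))

    lo, hi = 1e-9, 1e9                        # bracket for the multiplier
    for _ in range(iters):                    # bisection on lambda
        lam = 0.5 * (lo + hi)
        if utilization(speeds(lam)) > bound:
            lo = lam                          # not schedulable: raise speeds
        else:
            hi = lam                          # schedulable: try lower energy
    f = speeds(hi)
    energy = sum(ki * ui * fi ** 2 for ki, ui, fi in zip(k, u, f))
    return f, energy


if __name__ == "__main__":
    # Three tasks with utilizations 0.2, 0.15, 0.1 and per-task power factors.
    print(min_energy_speeds([0.2, 0.15, 0.1], [1.0, 2.0, 0.5]))
```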

Test Dependencies and the Future of Build Acceleration

300 Jay St., Room N922A, Brooklyn, NY, United States

With the proliferation of testing culture, many developers are facing new challenges. As projects get started, the focus may be on developing enough tests to maintain confidence that the code is correct. However, as developers write more and more tests, performance and repeatability become growing concerns for test suites. In our study of large open source software, we found that running tests took on average 41% of the total time needed to build each project, and over 90% in the projects that took the longest to build. Unfortunately, typical techniques from the literature for accelerating test suites (such as running only a subset of tests, or running them in parallel) cannot be applied safely in practice, since tests may depend on each other. These dependencies are very hard to find and detect, posing a serious challenge to test and build acceleration. In this talk, I will present my recent research in automatically detecting and isolating these dependencies, enabling significant, safe, and sound build acceleration of up to 16x.
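As a concrete, purely hypothetical illustration of why naive test selection or parallelization is unsafe, the snippet below shows two tests that share hidden module-level state: the suite passes when run in its usual order but breaks if the second test runs alone, in a different order, or on a different parallel worker. The test names and the shared cache are invented for illustration and are not from the research being presented.

```python
"""Hypothetical example of a hidden test dependency (names invented)."""
import unittest

_cache = {}          # shared module-level state: the hidden dependency


class OrderDependentTests(unittest.TestCase):
    def test_a_populates_cache(self):
        _cache["user"] = "alice"
        self.assertEqual(_cache["user"], "alice")

    def test_b_reads_cache(self):
        # Passes only if test_a already ran in this same process.
        self.assertEqual(_cache.get("user"), "alice")


if __name__ == "__main__":
    unittest.main()
```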

Static Analysis and Verification of C Programs

300 Jay St., Room N922A, Brooklyn, NY, United States

Subash Shankar, Hunter College, City University of New York. Recent years have seen the emergence of several static analysis techniques for reasoning about programs. This talk presents several major classes of such techniques, along with tools that implement them. Part of the presentation will be a demonstration of the tools.

How We Use Functional Programming to Find the Bad Guys

300 Jay St., Room N922A, Brooklyn, NY, United States

In this talk, Richard Minerich will discuss the research activities of Bayard Rock and its approaches to building tools that find the “bad guys”.

Pharmacology Powered by Computational Analysis: Predicting Cardiotoxicity of Chemotherapeutics

300 Jay St., Room N922A, Brooklyn, NY, United States

Cardiotoxicity is unfortunately a common side effect of many modern chemotherapeutic agents. The mechanisms that underlie these detrimental effects on heart muscle, however, remain unclear. The Drug Toxicity Signature Generation Center at ISMMS aims to address this unresolved issue by providing a bridge between molecular changes in cells and the prediction of pathophysiological effects. I will discuss ongoing work in which we use next-generation sequencing to quantify changes in gene expression that occur in cardiac myocytes after they are treated with potentially toxic chemotherapeutic agents. I will focus in particular on the computational pipeline we are developing that integrates sophisticated sequence alignment, statistical and network analysis, and dynamical mathematical models to develop novel predictions about the mechanisms underlying drug-induced cardiotoxicity.
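The pipeline itself is not specified in the abstract; as a minimal, purely illustrative sketch of one step it mentions (quantifying expression changes in treated versus control cells), the snippet below computes log2 fold changes from a toy read-count table. The gene names, counts, and pseudocount are invented for illustration and do not come from the Center's data.

```python
"""Illustrative only: toy log2 fold-change calculation for drug-treated vs.
control cardiac myocytes. Real pipelines (alignment, normalization,
statistical testing, network analysis) are far more involved."""
import math

# Read counts per gene: (control, treated) -- fabricated toy numbers.
counts = {
    "NPPB": (120, 480),
    "MYH7": (300, 150),
    "ACTB": (1000, 980),
}

PSEUDOCOUNT = 1.0  # avoid division by zero / log(0) for zero-count genes

for gene, (ctrl, treated) in counts.items():
    lfc = math.log2((treated + PSEUDOCOUNT) / (ctrl + PSEUDOCOUNT))
    print(f"{gene}\tlog2FC = {lfc:+.2f}")
```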

Two Projects in Text Data Mining and Natural Language Processing

300 Jay St., Room N928, Brooklyn, NY, United States

Elena Filatova, Department of Computer Systems Technology, New York City College of Technology, City University of New York. In this presentation I will describe two projects I am working …

Towards Improving Interface Modularity in Legacy Java Software Through Automated Refactoring

300 Jay St., Room N928, Brooklyn, NY, United States

The skeletal implementation pattern is a software design pattern consisting of defining an abstract class that provides a partial interface implementation. However, since Java allows only single class inheritance, if implementers decide to extend a skeletal implementation, they will not be allowed to extend any …

Bio-inspired Computation Approach for Tumor Growth with Spatial Randomness Analysis of Kidney Cancer Xenograft Pathology Slides

300 Jay St., Room N928, Brooklyn, NY, United States

In our research, we analyze digitized images of Hematoxylin-Eosin (H&E) slides containing tumorous tissues from patient-derived xenograft models to build our bio-inspired computation method, namely Personalized Relevance Parameterization of Spatial Randomness (PReP-SR). Applying the spatial pattern analysis techniques of quadrat counts, kernel estimation, and nearest neighbor functions to the images of the H&E samples, slide-specific features are extracted to examine the hypothesis that dependencies among nuclei positions carry information about individual tumor characteristics. These features are then used as inputs to PReP-SR to compute tumor growth parameters for the exponential-linear model. Differential evolution algorithms are developed for tumor growth parameter computation, where a candidate vector in a population consists of size selection indices for spatial evaluation and weight coefficients for spatial features and their correlations. Using the leave-one-out cross-validation method, we showed that, for a set of H&E slides from kidney cancer patient-derived xenograft models, PReP-SR generates personalized model parameters with an average error rate of 13.58%. These promising results indicate that bio-inspired computation techniques may be useful for constructing mathematical models with patient-specific growth parameters in clinical systems.
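As a hedged sketch of one of the spatial statistics named above (and not of PReP-SR itself), the snippet below computes a quadrat-count index of dispersion for nuclei centroids in the unit square: under complete spatial randomness the counts are roughly Poisson, so a variance-to-mean ratio well above 1 suggests clustering and well below 1 suggests regularity. The grid size and the choice of this particular statistic are illustrative assumptions.

```python
"""Hedged sketch: quadrat-count test for departure from complete spatial
randomness (CSR), applied to (x, y) nuclei centroids in the unit square."""
import random


def quadrat_dispersion(points, grid=4):
    """Variance-to-mean ratio of counts over a grid x grid quadrat partition."""
    counts = [0] * (grid * grid)
    for x, y in points:
        i = min(int(x * grid), grid - 1)
        j = min(int(y * grid), grid - 1)
        counts[i * grid + j] += 1
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
    return var / mean if mean > 0 else float("nan")


if __name__ == "__main__":
    random.seed(0)
    csr = [(random.random(), random.random()) for _ in range(400)]
    print("CSR-like point pattern, index of dispersion:",
          round(quadrat_dispersion(csr), 2))
```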

Ontology-based Classification and Faceted Search Interface for APIs

300 Jay St., Room N928, Brooklyn, NY, United States

This work introduces faceted service discovery. It uses the Programmable Web directory as its corpus of APIs and enhances the search to enable faceted search, given an OWL ontology. The ontology describes the semantic features of the APIs. We have designed the API classification ontology using LexOnt, a semi-automatic ontology creation tool we have built. LexOnt is geared toward non-experts within a service domain who want to create a high-level ontology that describes the domain. Using well-known NLP algorithms, LexOnt generates a list of top terms and phrases from the Programmable Web corpus to enable users to find high-level features that distinguish one Programmable Web service category from another. To further aid non-experts, LexOnt relies on outside sources such as Wikipedia and WordNet to help the user identify the important terms within a service category. Using the ontology created with LexOnt, we have built APIBrowse, a faceted search interface for APIs. The ontology, in combination with the Apache Solr search platform, is used to generate a faceted search interface for APIs based on their distinguishing features. With this ontology, an API is classified under multiple categories and displayed within the APIBrowse interface. APIBrowse gives programmers the ability to search for APIs by their semantic features and keywords, and presents them with a filtered and more accurate set of search results.
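The description maps naturally onto standard Solr faceting; the snippet below is a minimal, hypothetical query against an APIBrowse-style Solr core. The endpoint, core name, and field names (category, protocol, data_format) are assumptions and not taken from the actual system; only the query parameters (q, fq, facet, facet.field) are standard Solr.

```python
"""Hypothetical faceted query against an APIBrowse-style Apache Solr core."""
import json
import urllib.parse
import urllib.request

SOLR_URL = "http://localhost:8983/solr/apibrowse/select"   # assumed core

params = [
    ("q", "maps"),                  # keyword search over API descriptions
    ("fq", "category:Mapping"),     # drill down on a selected facet value
    ("facet", "true"),
    ("facet.field", "protocol"),    # remaining facets to display
    ("facet.field", "data_format"),
    ("wt", "json"),
]

url = SOLR_URL + "?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print("matching APIs:", data["response"]["numFound"])
print("protocol facet:", data["facet_counts"]["facet_fields"]["protocol"])
```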

Eager Execution in TensorFlow

N923

Alexandre Passos, Software Engineer, Google. In this talk we'll go over TensorFlow, an open-source cross-platform machine learning library developed by Google, and explore its new feature: eager execution. We'll go over how to …
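A minimal sketch of the feature the talk covers, using the TensorFlow 1.x-era API where eager mode had to be switched on explicitly (it is the default in TF 2.x); this is illustrative only and not code from the talk.

```python
"""Minimal TensorFlow eager execution example (TF 1.x-era API)."""
import tensorflow as tf

tf.enable_eager_execution()          # ops now run immediately, no Session

x = tf.constant([[2.0, 0.0],
                 [0.0, 3.0]])
y = tf.matmul(x, x)                  # evaluated eagerly
print(y.numpy())                     # -> [[4. 0.] [0. 9.]]

# Gradients are recorded on the fly with GradientTape.
w = tf.Variable(5.0)
with tf.GradientTape() as tape:
    loss = w * w
print(tape.gradient(loss, w).numpy())   # -> 10.0
```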