- DDoS: a 20-year journey from compromised workstations to IoT attacks
- Building noise robust machine listeners with data and inspiration from humans
- Driving Enterprise Transformation with Virtual & Augmented Reality
- Eager Execution in TensorFlow
- Applied AI Techniques
- Energy Management as a Service (EmaaS): Design, Analysis and Realization
- Slides for talk on Ontology-based Classification and Faceted Search Interface for APIs
- Ontology-based Classification and Faceted Search Interface for APIs
- NYC, Media and Technology: What’s Hot
- Bio-inspired Computation Approach for Tumor Growth with Spatial Randomness Analysis of Kidney Cancer Xenograft Pathology Slides
Category Archives: Research
Slides for “Pharmacology Powered by Computational Analysis: Predicting Cardiotoxicity of Chemotherapeutics” by Jaehee Shim now available
OCTOBER 22 @ 12:00 PM – 1:00 PM, Room N922A
Cardiotoxicity is unfortunately a common side effect of many modern chemotherapeutic agents. The mechanisms that underlie these detrimental effects on heart muscle, however, remain unclear. The Drug Toxicity Signature Generation Center at ISMMS aims to address this unresolved issue by providing a bridge between molecular changes in cells and the prediction of pathophysiological effects. I will discuss ongoing work in which we use next-generation sequencing to quantify changes in gene expression that occur in cardiac myocytes after they are treated with potentially toxic chemotherapeutic agents. I will focus in particular on the computational pipeline we are developing that integrates sophisticated sequence alignment, statistical and network analysis, and dynamical mathematical models to develop novel predictions about the mechanisms underlying drug-induced cardiotoxicity.
Jaehee Shim is a Ph.D. candidate in the Biophysics and Systems Pharmacology Program at the Icahn School of Medicine at Mount Sinai (ISMMS). As part of her Ph.D. studies, she is building dynamical prediction models based on analysis of gene expression data generated by the Drug Toxicity Signature Generation Center at ISMMS. She received her B.S. in Biochemistry from the University of Michigan-Dearborn. Prior to starting her Ph.D., Jaehee worked at the ISMMS Genomics Core with a team of senior scientists, gaining experience in improving and troubleshooting RNA sequencing protocols on next-generation sequencing platforms.
Slides for the “Static Analysis and Verification of C Programs” talk are now available on SlideShare.
Director of Research, Bayard Rock, LLC
OCTOBER 1 @ 12:00 PM – 1:00 PM
300 Jay St., Room N922A, Brooklyn, NY 11201
Traditional approaches to anti-money laundering involve simple matching algorithms and a great deal of human review. In recent years, however, this approach has proven not to scale with an increasingly strict regulatory environment. We at Bayard Rock have had much success applying more sophisticated approaches, including machine learning, to this problem. In this talk I will walk you through the general problem domain and discuss some of the algorithms we use. I’ll also touch on why and how we leverage typed functional programming for rapid iteration with a small team in order to out-innovate our competitors.
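As a toy illustration of the kind of fuzzy name matching that watchlist screening traditionally relies on (this is a generic sketch, not one of Bayard Rock’s actual algorithms; the names, threshold, and helper functions are hypothetical):

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude case-insensitive similarity score between two names, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen(candidate: str, watchlist: list[str], threshold: float = 0.85) -> list[str]:
    """Return watchlist entries whose similarity to `candidate` meets the threshold."""
    return [entry for entry in watchlist if name_similarity(candidate, entry) >= threshold]

# A misspelled name still matches its watchlist entry above the threshold.
hits = screen("Jonh Smith", ["John Smith", "Jane Doe"])
```

The weakness the talk alludes to is visible even here: every borderline score near the threshold becomes a candidate for human review, which is exactly what fails to scale.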
Bayard Rock, LLC, is a private research and software development company headquartered in the Empire State Building. It is a leader in the field of research and development of tools for anti-money laundering and fraud detection. As you might imagine, these tools rely heavily on mathematics and graph algorithms. In this talk, Richard Minerich will discuss the research activities of Bayard Rock and its approaches to building tools that find the “bad guys.” Richard Minerich is Bayard Rock’s Director of Research and Development. Rick has expertise in F#, C#, C, C++, C++/CLI, .NET (1.1 through 4.5), object-oriented design, functional design, entity resolution, machine learning, concurrency, and image processing. He is interested in algorithmically and mathematically complex projects and remains open to exploring new ideas.
Rick holds two patents. The first, co-invented with a colleague, is titled “Method of Image Analysis Using Sparse Hough Transform.” The other, held independently, is titled “Method for Document to Template Alignment.”
Light refreshments will be served.
Static Analysis and Verification of C Programs
SEPTEMBER 17 @ 12:00 PM – 1:00 PM
Recent years have seen the emergence of several static analysis techniques for reasoning about programs. This talk presents several major classes of techniques and tools that implement these techniques. Part of the presentation will be a demonstration of the tools.
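One major class of techniques the talk surveys is abstract interpretation: running a program over an abstract domain instead of concrete values. The following is an illustrative sketch over the classic sign domain (not one of the tools demonstrated in the talk; the mini-language and function names are invented for the example):

```python
# Toy abstract interpretation over the sign domain {NEG, ZERO, POS, TOP}.
# It analyzes straight-line assignments without ever running the program
# on concrete inputs.

NEG, ZERO, POS, TOP = "-", "0", "+", "?"

def sign_of_const(n: int) -> str:
    return POS if n > 0 else NEG if n < 0 else ZERO

def mul(a: str, b: str) -> str:
    """Abstract multiplication on signs (sound over-approximation)."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def analyze(program):
    """program: list of ('const', var, n) or ('mul', var, x, y) statements."""
    env = {}
    for op, var, *args in program:
        if op == "const":
            env[var] = sign_of_const(args[0])
        elif op == "mul":
            env[var] = mul(env[args[0]], env[args[1]])
    return env

# x = -5; y = 7; z = x * y  -- the analysis proves z < 0 without evaluating -5 * 7.
env = analyze([("const", "x", -5), ("const", "y", 7), ("mul", "z", "x", "y")])
```

Real static analyzers extend this idea with richer domains (intervals, polyhedra), branches, and loops with fixed-point iteration; the sign domain just makes the core mechanism visible.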
Dr. Subash Shankar is an Associate Professor in the Computer Science department at Hunter College, CUNY. Prior to joining CUNY, he received a PhD from the University of Minnesota and was a postdoctoral fellow in the model checking group at Carnegie Mellon University. Dr. Shankar also has over 10 years of industrial experience, mostly in the areas of formal methods and tools for analyzing hardware and software systems.
Slides for the “Test Dependencies and the Future of Build Acceleration” talk are now available on SlideShare.
Test Dependencies and the Future of Build Acceleration
SEPTEMBER 10 @ 12:00 PM – 1:00 PM
With the proliferation of testing culture, many developers are facing new challenges. As projects get started, the focus may be on developing enough tests to maintain confidence that the code is correct. However, as developers write more and more tests, performance and repeatability become growing concerns for test suites. In our study of large open source software, we found that running tests took on average 41% of the total time needed to build each project – over 90% in the projects that took the longest to build. Unfortunately, typical techniques from the literature for accelerating test suites (such as running only a subset of tests, or running them in parallel) cannot be applied safely in practice, since tests may depend on each other. These dependencies are very hard to find and detect, posing a serious challenge to test and build acceleration. In this talk, I will present my recent research in automatically detecting and isolating these dependencies, enabling significant, safe, and sound build acceleration of up to 16x.
Jon is a fourth-year PhD candidate at Columbia University studying software engineering with Prof. Gail Kaiser. His research interests in software engineering mostly fall under the umbrella of software testing and program analysis. Jon’s recent research in accelerating software testing has been recognized with an ACM SIGSOFT Distinguished Paper Award (ICSE ’14), and has been the basis for an industrial collaboration with the Bay Area software build acceleration company Electric Cloud. Jon actively participates in the artifact evaluation program committees of ISSTA and OOPSLA, and has served several years as the Student Volunteer chair for OOPSLA.
In case you missed it, we now have the audio available for the talk on Big Data Challenges and Solutions from last Spring semester.
“Minimum Energy Consumption for Rate Monotonic Algorithm in a Hard Real-Time Environment” by Tin Yau Tam
Title: Minimum Energy Consumption for Rate Monotonic Algorithm in a Hard Real-Time Environment.
In cooperation with the Noyce Summer Camp
Date: June 8, 2015, 2 PM to 3 PM
Location: NAM 922A
Speaker: Prof. Tin Yau Tam (Chair of Mathematics Department at Auburn University)
Abstract: We will discuss the problem of determining the minimum energy consumption for the rate monotonic algorithm in a hard real-time environment. The solution is obtained by the Lagrange multiplier method; because the solution procedure is iterative, a computer algorithm has been developed.
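As a hedged sketch of what such a formulation can look like (the talk’s exact cost model is not given in the abstract; the sketch below assumes a common dynamic-voltage-scaling model where task i runs c_i cycles per period T_i at frequency f_i with energy proportional to c_i f_i², constrained by the Liu–Layland rate-monotonic schedulability bound):

```latex
% Assumed model, for illustration only:
\min_{f_1,\dots,f_n} \; E(f) = \sum_{i=1}^{n} c_i f_i^{2}
\qquad \text{subject to} \qquad
\sum_{i=1}^{n} \frac{c_i}{f_i T_i} \le n\!\left(2^{1/n}-1\right).

% Stationarity of the Lagrangian
% L = E(f) + \lambda \left( \sum_i c_i/(f_i T_i) - n(2^{1/n}-1) \right)
% gives, for each task i,
\frac{\partial L}{\partial f_i}
  = 2 c_i f_i - \frac{\lambda c_i}{T_i f_i^{2}} = 0
\quad \Longrightarrow \quad
f_i = \left( \frac{\lambda}{2 T_i} \right)^{1/3},
```

with the multiplier λ then chosen, typically numerically, so that the utilization constraint holds with equality; that numerical step is where an iterative computer algorithm naturally enters.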
Computer Systems Technology Colloquium Series presents:
Big Data Challenges and Solutions
Computer Systems Technology
New York City College of Technology
Thursday, April 16, 2015 12-1pm
Light refreshments will be served!
Big data is set to offer tremendous insight. But with terabytes and petabytes of data pouring into organizations today, traditional architectures and infrastructures are not up to the challenge. This raises the question: How do you present big data in a way that can be quickly understood and used? These data present tremendous opportunities for data mining, a burgeoning field in computer science that focuses on the development of methods that can extract knowledge from data. In many real-world problems, data mining algorithms have access to massive amounts of data, and mining all the available data is prohibitive due to computational (time and memory) constraints. Much of the current research is concerned with scaling up data mining algorithms (i.e., improving existing algorithms to handle larger datasets). An alternative approach is to scale down the data. Thus, determining the smallest training set size that achieves the same accuracy as the entire available dataset remains an important research question. Our research focuses on selecting how many instances to present to the data mining algorithm (sampling) and on improving the quality of the data.
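A generic progressive-sampling sketch of the “scale down the data” idea (this is an illustration of the general technique, not the speaker’s algorithm; the toy dataset, tolerance, and function names are invented): grow the sample until the quantity being estimated stabilizes, rather than processing the full dataset.

```python
import random

random.seed(42)

# Stand-in for a massive dataset: we estimate its mean from growing samples.
population = [random.gauss(50, 10) for _ in range(1_000_000)]

def estimate(sample):
    """The quantity a data mining algorithm would compute; here, just the mean."""
    return sum(sample) / len(sample)

def progressive_sample(data, start=1000, tolerance=0.5):
    """Double the sample size until consecutive estimates agree within tolerance."""
    size = start
    prev = estimate(random.sample(data, size))
    while size * 2 <= len(data):
        size *= 2
        cur = estimate(random.sample(data, size))
        if abs(cur - prev) <= tolerance:
            return size, cur  # plateau reached: a "sufficient" sample size
        prev = cur
    return len(data), estimate(data)  # fell back to the full dataset

size, mean = progressive_sample(population)
```

In practice the estimate would be a model’s held-out accuracy rather than a mean, but the stopping logic is the same: once more data stops changing the answer, the extra data is not buying accuracy.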
Dr. Ashwin Satyanarayana is an Assistant Professor in the Computer Systems Technology department at CityTech. Prior to joining CityTech, Ashwin was a Research Scientist at Microsoft, where he worked on several big data problems, including query reformulation on Microsoft’s search engine Bing. Ashwin’s prior experience also includes work as a Senior Research Scientist in the area of location analytics at Placed Inc. He holds a PhD in Computer Science (Data Mining) from SUNY, with particular emphasis on data mining, machine learning, and applied probability, with applications to real-world learning problems.