Deterministic Time-Space Tradeoffs for k-SUM
Andrea Lincoln | Stanford
Given a set of numbers, the k-SUM problem asks for a subset of k numbers that sums to zero. When the numbers are integers, the time and space complexity of k-SUM is generally studied in the word-RAM model; when the numbers are reals, the complexity is studied in the real-RAM model, and space is measured by the number of reals held in memory at any point. We present a time- and space-efficient deterministic self-reduction for the k-SUM problem that holds in both models and has many interesting consequences.
– We give a reduction that converts log-factor improvements for 3-SUM algorithms into algorithms that run just as fast while using small space.
– We show that a more plausible conjecture, the Small Space 3-SUM Conjecture, is equivalent to the very popular 3-SUM Conjecture.
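As a reference point, the problem itself can be stated as a brute-force search; the function below is a minimal sketch (the talk's contribution is a self-reduction that makes far faster algorithms space-efficient, not this naive method):

```python
from itertools import combinations

def k_sum(nums, k):
    """Brute-force k-SUM: return a k-subset summing to zero, or None.

    Runs in O(n^k) time; for 3-SUM, much faster ~n^2 algorithms exist,
    and the self-reduction shows how to run them in small space.
    """
    for combo in combinations(nums, k):
        if sum(combo) == 0:
            return combo
    return None
```

For example, `k_sum([1, 2, -3, 7], 3)` finds the subset `(1, 2, -3)`.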
Not Just a Black Box: Interpretable Deep Learning for Genomics
Avanti Shrikumar | Stanford
Convolutional Neural Networks (CNNs) have emerged as a state-of-the-art technique in a variety of machine learning applications, ranging from computer vision to regulatory genomics. However, methods for interpreting these models leave much room for improvement. Patterns found by individual filters in a CNN are often visualized, but this approach fails to account for the inherently distributed representations learned by CNNs, in which multiple filters often cooperate to describe a single pattern. A variety of techniques exist to generate saliency maps for individual input examples, but these techniques do not give the user a consolidated view of the recurring patterns across all input examples, and often come with their own drawbacks, such as being computationally expensive or susceptible to artifacts. Here, we present DeepLIFT (Deep Linear Importance Feature Tracker), a family of novel techniques that not only produces saliency maps without the drawbacks of other approaches, but which, to our knowledge, is the only approach that can integrate the combined effects of multiple cooperating filters to identify common patterns in the input data. We demonstrate the successful application of DeepLIFT to interpreting deep learning models for diverse learning tasks in regulatory genomics. DeepLIFT not only produces much cleaner and broader patterns than alternative methods, but can also detect dependencies between positions in the input sequence and produce novel insights into the underlying heterogeneity of regulatory DNA. Our framework provides the first comprehensive interpretation engine for deep neural networks in genomics. Although we have specifically studied the application to genomics, the techniques we have pioneered generalize easily to other domains.
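To make the notion of per-feature importance scores concrete, here is a minimal sketch for a purely linear model. This is emphatically not DeepLIFT itself, which propagates activation differences from a reference input through a deep network; but for a linear model the two notions coincide, and the function name is our own:

```python
import numpy as np

def linear_attributions(w, x, x_ref):
    """Per-feature contribution to y(x) - y(x_ref) for y = w . x + b.

    Each feature's score is its weight times its difference from the
    reference; the scores sum exactly to the change in the output.
    """
    return w * (x - x_ref)
```

The key property, shared by DeepLIFT's rules, is that the attributions sum to the difference between the model's output on the input and on the reference.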
Clockwork Convnets for Video Semantic Segmentation
Kate Rakelly | Berkeley
Video semantic segmentation is an understudied yet increasingly important task, with applications to self-navigating vehicles, robotics, and wearable computing. Recent years have seen tremendous progress in still-image segmentation; however, the naive application of these state-of-the-art algorithms to every video frame requires considerable computation and ignores the temporal continuity inherent in video. We propose a video recognition framework that relies on two key observations: 1) while pixels may change rapidly from frame to frame, the semantic content of a scene evolves more slowly, and 2) execution can be viewed as an aspect of architecture, yielding purpose-fit computation schedules for networks. We define a novel family of “clockwork” convnets driven by fixed or adaptive clock signals that schedule the processing of different layers at different update rates according to their semantic stability. We design a pipeline schedule to reduce latency for real-time recognition and a fixed-rate schedule to reduce overall computation. Finally, we extend clockwork scheduling to adaptive video processing by incorporating data-driven clocks that can be tuned on unlabeled video. The accuracy and efficiency of clockwork convnets are demonstrated on the YouTube-Objects and NYUD video datasets.
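The fixed-rate scheduling idea can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: stage functions and rates are placeholders, and a real clockwork convnet schedules groups of convolutional layers rather than arbitrary callables:

```python
class ClockworkScheduler:
    """Fixed-rate clockwork schedule sketch.

    Each stage is recomputed only every `rate` frames; otherwise its
    cached output from an earlier frame is reused, exploiting the slow
    evolution of deep semantic features relative to raw pixels.
    """
    def __init__(self, stages):          # stages: list of (fn, rate)
        self.stages = stages
        self.cache = [None] * len(stages)

    def step(self, frame_idx, x):
        for i, (fn, rate) in enumerate(self.stages):
            if frame_idx % rate == 0 or self.cache[i] is None:
                self.cache[i] = fn(x)    # fire this stage's clock
            x = self.cache[i]            # else reuse stale features
        return x
```

A deep stage with rate 2 runs on only half the frames, which is exactly where the computation savings come from.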
Modeling the Dynamics of Human Decision Making
Hima Lakkaraju | Stanford
Each year, judges, doctors, and other decision makers around the world make millions of critical decisions. Gaining insight into such decision-making processes helps us assess the quality of individual decisions and enhances our ability to make optimal decisions. In this talk, I will present my research, which models such decision making with the following goals: 1) inferring optimal decision strategies, 2) obtaining diagnostic insights into the patterns of mistakes made by decision makers, and 3) characterizing how various decision-making styles change over time.
Towards an Intelligent Power Grid for a Sustainable Future
Mahnoosh Alizadeh | Stanford
A key challenge for modernizing how we consume resources provided by our societal-scale infrastructure is in capturing the close interplay of three major elements that affect their operations: physical control mechanisms, information technology, and the economic and social aspects introduced by humans in the loop. My research examines this important challenge in the context of electricity demand side management (DSM) programs in smart power grids. DSM schemes offer electricity end-users the ability to have a more elastic consumption behavior. This would help increase market efficiency and margins of safety in power systems, particularly under high levels of renewable energy integration. In particular, in this talk, I will discuss how Electric Vehicles (EVs) are emerging as one of the primary solutions for making electricity demand elastic. I show that designing a reliable EV demand management scheme requires accounting for the fact that EV demand can be shifted not only temporally but also geographically. Every day, each EV owner needs to jointly decide on a travel plan (including path) and on charging locations and associated charge amounts. At the system level, this joint planning of charge and path introduces a connection between intelligent power and transportation systems. In fact, we have shown that ignoring this interdependence can result in instabilities in power grid operations. We provide insights into how to design electricity prices that account for this interconnection. Such prices can move EV owners who make selfish decisions about path and charge toward a socially optimal traffic pattern and energy footprint.
Tackling Computationally Expensive Optimization Problems with Surrogate Models
Juliane Mueller | Berkeley
Optimization problems are encountered in many application areas, for example, in environmental engineering, climate modeling, combustion, and materials science. The objective and constraint function evaluations are often based on running computationally expensive simulation models that may take hours to complete. Analytic descriptions of the objective and constraint functions are not available (black-box), and neither are derivatives. Therefore, in an optimization setting, the goal is to find near-optimal solutions using only a few expensive evaluations in order to keep the optimization time acceptable. To this end, we use surrogate models to cheaply approximate the expensive functions and guide the search for improved solutions. In this talk, we will give a brief overview of surrogate model algorithms and problem classes that have previously been successfully addressed with the surrogate model approach.
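The surrogate loop can be sketched as follows. This is a deliberately minimal illustration, not any specific algorithm from the talk: we assume a cubic RBF interpolant and pure exploitation over random candidate points, whereas practical solvers carefully balance exploration against exploitation:

```python
import numpy as np

def rbf_surrogate_opt(f, lb, ub, n_init=5, n_iter=10, seed=0):
    """Minimal surrogate-optimization sketch for expensive black-box f.

    Evaluate f at a few initial points, fit a cheap cubic RBF
    interpolant, then repeatedly evaluate f where the surrogate
    predicts the best value among random candidates.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_init, len(lb)))
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        # Fit cubic RBF interpolant s(x) = sum_i w_i * ||x - x_i||^3
        D = np.linalg.norm(X[:, None] - X[None, :], axis=-1) ** 3
        w = np.linalg.solve(D + 1e-8 * np.eye(len(X)), y)
        # Score cheap random candidates with the surrogate only
        cand = rng.uniform(lb, ub, size=(200, len(lb)))
        s = (np.linalg.norm(cand[:, None] - X[None, :], axis=-1) ** 3) @ w
        x_new = cand[np.argmin(s)]
        X = np.vstack([X, x_new])
        y = np.append(y, f(x_new))      # one expensive evaluation per iter
    return X[np.argmin(y)], y.min()
```

The expensive function is called only n_init + n_iter times; all other work happens on the cheap surrogate.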
A Variational Perspective on Accelerated Methods in Optimization
Ashia Wilson | Berkeley
Accelerated gradient methods play a central role in optimization, achieving optimal rates in many settings. While many generalizations and extensions of Nesterov’s original acceleration method have been proposed, the natural scope of the acceleration concept is not yet clear. In this paper, we study accelerated methods from a continuous-time perspective. We show that there is a Lagrangian functional, which we call the Bregman Lagrangian, that generates a large class of accelerated methods in continuous time, including (but not limited to) accelerated gradient descent, its non-Euclidean extension, and accelerated higher-order gradient methods. We show that the continuous-time limits of all of these methods correspond to traveling the same curve in spacetime at different speeds. From this perspective, Nesterov’s technique and many of its generalizations can be viewed as a systematic way to go from the continuous-time curves generated by the Bregman Lagrangian to a family of discrete-time accelerated algorithms.
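For reference, the Bregman Lagrangian at the heart of this framework can be written (up to notational conventions) as:

```latex
\mathcal{L}(X, V, t)
  = e^{\alpha_t + \gamma_t}
    \left( D_h\!\left(X + e^{-\alpha_t} V,\; X\right)
           - e^{\beta_t} f(X) \right),
\qquad
D_h(y, x) = h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle,
```

where $f$ is the objective, $h$ is a convex distance-generating function with Bregman divergence $D_h$, and $\alpha_t, \beta_t, \gamma_t$ are time-dependent scaling functions. Under suitable "ideal scaling" conditions on these functions, solutions of the Euler-Lagrange equation achieve the convergence rate $f(X_t) - f(x^\ast) = O(e^{-\beta_t})$, and different choices of the scalings recover the different accelerated methods mentioned above.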
Designing an Accelerometer for Extreme Environments
Karen Dowling | Stanford
Micro-electro-mechanical systems (MEMS) have been a growing market in the last few decades – in particular accelerometers and gyroscopes. The emergence of the Internet of Things creates a need for new sensors that can handle harsh environments – robots and vehicles on volcanoes, navigation in the deep ocean, and even landers on other planets and moons require sensors robust to these unique situations. This talk will walk through the requirements for an accelerometer to operate in some of these environments, and the possible strategies (materials, transduction platforms, etc.) to pursue for my thesis.
Pixel Size Receiver for Medical Ultrasound
Man-Chia Chen | Stanford
Medical ultrasound is one of the most popular diagnostic imaging modalities because it is non-invasive and low cost. Most ultrasound systems today support only 2D images. In recent years, interest has grown in building 3D ultrasound systems that capture volumetric information about the imaged objects. Conventional 2D systems separate the hardware into two parts: most of the processing, including digitization, beamforming, and image reconstruction, is performed in the back-end console station, while only the transducers are integrated on the probe handle. A 3D system has difficulty adopting this conventional architecture because the interconnects between the transducer and the back-end console become the bottleneck as the number of channels increases. Advanced assembly techniques, such as direct flip-chip bonding between the transducer arrays and ASICs, have been demonstrated in the literature as an effective alternative to cable connections. In parallel, sub-array beamforming has been proposed to reduce the cable channel count by combining the signals of adjacent channels inside the probe. A single-chip 64-channel subarray beamformer has been shown in a recent publication; however, it occupies a large silicon area and is thus challenging to connect to the transducer array. My research therefore focuses on the design of pixel-size electronics that integrate the subarray beamformer within a sensor-pixel footprint, preserving the ability to flip-chip bond directly to a 2D transducer array. To achieve high integration density, we use inverter-based amplifiers and a non-traditional beamforming architecture to simplify the building blocks. A proof-of-concept prototype IC has been implemented in a 28nm process.
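The core beamforming operation being integrated into each pixel is delay-and-sum. The digital sketch below illustrates the principle only (the talk's design performs the equivalent operation in compact analog circuitry, and the sample delays here are placeholders):

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Delay-and-sum beamforming sketch.

    channels: (n_ch, n_samples) array of per-element echo traces.
    delays:   non-negative integer sample delay per channel.
    Each trace is advanced by its focusing delay so echoes from the
    focal point line up, then all traces are summed into one output.
    """
    n_ch, n = channels.shape
    out = np.zeros(n)
    for ch, d in zip(channels, delays):
        out[:n - d] += ch[d:]
    return out
```

Combining a subarray's channels into one summed trace inside the probe is what reduces the cable count to the back-end console.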
Flash Storage Disaggregation
Ana Klimovic | Stanford
PCIe-based Flash is commonly deployed to provide datacenter applications with high IO rates. However, its capacity and bandwidth are often underutilized as it is difficult to design servers with the right balance of CPU, memory and Flash resources over time and for multiple applications. We examine Flash disaggregation as a way to deal with Flash overprovisioning. We tune remote access to Flash over commodity networks and analyze its impact on workloads sampled from real datacenter applications. We show that, while remote Flash access introduces a 20% throughput drop at the application level, disaggregation allows us to make up for these overheads through resource-efficient scale-out. Hence, we show that Flash disaggregation allows scaling CPU and Flash resources independently in a cost effective manner.
A Deep Learning Model for Predicting and Understanding CTCF Binding
Irene Kaplow | Berkeley
CTCF is involved in holding DNA loops together, thereby playing a key role in determining how genes are regulated. Thus, we can study the mechanism behind gene expression variation across cell types by understanding how CTCF binding varies. However, our current understanding of the relationship between DNA sequence and CTCF binding is incomplete. We have therefore developed a deep neural network approach to predict CTCF binding and show that our approach provides accurate predictions. We also find that a model trained in one cell type can predict binding in other cell types, which suggests that CTCF binding is not dependent on the presence of cell-type-specific co-factors. In addition, we find that the most important sequence patterns in predicting CTCF binding are the CTCF motif and its reverse complement, suggesting that CTCF usually binds directly to DNA.
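The notion of a motif and its reverse complement can be illustrated with a simple string scan. This is only a toy: the real model learns the motif as soft convolutional filters rather than an exact string match, and the motif fragment in the usage example is hypothetical:

```python
def reverse_complement(seq):
    """Reverse complement of a DNA string (A<->T, C<->G)."""
    comp = {'A': 'T', 'C': 'G', 'G': 'C', 'T': 'A'}
    return ''.join(comp[b] for b in reversed(seq))

def motif_hits(seq, motif):
    """Count exact occurrences of a motif on either strand of seq."""
    rc = reverse_complement(motif)
    return sum(seq[i:i + len(motif)] in (motif, rc)
               for i in range(len(seq) - len(motif) + 1))
```

Counting both strands matters because a binding site on the reverse strand appears in the sequenced strand as the motif's reverse complement.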
Interaction Design for Autism Glass
Catherine Xu | Stanford
Gold Nanoparticle-Polymer Composite Materials with Improved Dielectric Properties
Anju Toor | Berkeley
A substantial amount of work has been carried out in the area of dielectric nanocomposite materials. However, reported nanocomposite materials generally suffer from disadvantages such as high dielectric loss, mainly due to nanoparticle agglomeration. Here, we present a novel nanocomposite material consisting of polyvinylpyrrolidone (PVP)-functionalized gold nanoparticles (5 nm in diameter) embedded in a polyvinylidene fluoride (PVDF) host polymer. A homogeneous dispersion of gold nanoparticles with low particle agglomeration has been achieved at loadings up to 15 wt %. The fabricated nanoparticle-polymer composite films are free of voids and cracks, and enhanced dielectric permittivities with low dielectric loss values are observed.
Generating Visual Explanations
Lisa Anne Hendricks | Berkeley
Clearly explaining a rationale for a classification decision to an end-user can be as important as the decision itself. Existing approaches for deep visual recognition are generally opaque and do not output any justification text; contemporary vision-language models can describe image content but fail to take into account class-discriminative image aspects which justify visual predictions. We propose a new model that focuses on the discriminating properties of the visible object, jointly predicts a class label, and explains why the predicted label is appropriate for the image. We propose a novel loss function based on sampling and reinforcement learning that learns to generate sentences that realize a global sentence property, such as class specificity. Our results on a fine-grained bird species classification dataset show that our model is able to generate explanations which are not only consistent with an image but also more discriminative than descriptions produced by existing captioning methods.
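The sampling-based loss can be sketched with the standard REINFORCE estimator. This is a generic illustration under assumed names, not the paper's exact formulation: a sentence is sampled from the model, a reward scores a global sentence-level property such as class specificity, and the sentence log-likelihood is weighted by the (baseline-subtracted) reward:

```python
import numpy as np

def reinforce_loss(log_probs, reward, baseline=0.0):
    """REINFORCE-style loss for a sampled sentence.

    log_probs: per-token log-probabilities of the sampled sentence.
    reward:    scalar score of a global sentence property.
    Minimizing this increases the likelihood of sentences whose
    reward exceeds the baseline, and decreases it otherwise.
    """
    return -(reward - baseline) * np.sum(log_probs)
```

Because the reward is applied to the whole sampled sentence, the property being optimized need not decompose over individual words, which is what makes global properties like class specificity trainable.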
RoughCut: A Transcript-Based Video Editing Tool
Mackenzie Leake | Stanford
Have you ever wanted to edit a video, but found the process harder and more time-consuming than you thought it should be? RoughCut is a system that aims to make editing videos easier. The system aligns text transcripts to the audio and video inputs and then uses the text to guide the editing process. Using techniques from computer vision, sentiment analysis, and graphical models, the system automatically edits raw video clips into a sequence.
Learning Visual Objectives for Robotic Manipulation
Chelsea Finn | Berkeley
Reinforcement learning can enable agents to acquire complex behaviors from high-level specifications. However, defining an objective function that can be optimized effectively and encodes the correct task is challenging in practice. We explore how inverse optimal control (IOC) can be used to learn behaviors from demonstrations, with applications to robotic manipulation. Our method addresses two key challenges in inverse optimal control: first, the need for informative features and effective regularization to impose structure on the cost, and second, the difficulty of learning the cost function under unknown dynamics for high-dimensional continuous systems. To address the former challenge, we present an algorithm capable of learning arbitrary nonlinear cost functions, such as neural networks, without meticulous feature engineering. To address the latter challenge, we formulate an efficient sample-based approximation for Maximum Entropy IOC. We evaluate our method on a series of simulated tasks and real-world robotic manipulation problems, demonstrating substantial improvement over prior methods both in terms of task complexity and sample efficiency.
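The sample-based approximation can be illustrated for the simplest case of a linear cost over trajectory features. This is a sketch under assumed details, not the talk's nonlinear neural-network method: the partition function of the MaxEnt model is approximated by a finite set of sampled trajectories:

```python
import numpy as np

def maxent_ioc_grad(theta, feat_demo, feat_samp):
    """Gradient of a sample-based MaxEnt IOC objective, linear cost c = theta . f.

    feat_demo: (n_demo, d) features of demonstrated trajectories.
    feat_samp: (n_samp, d) features of sampled trajectories used to
               approximate the partition function.
    Objective: mean demo cost + log sum over samples of exp(-cost),
    so its gradient pushes demo cost down relative to sample cost.
    """
    c_samp = feat_samp @ theta
    w = np.exp(-c_samp - np.max(-c_samp))   # stabilized softmax weights
    w /= w.sum()
    return feat_demo.mean(axis=0) - w @ feat_samp
```

At a stationary point, the expected features under the learned cost's soft-optimal distribution match the demonstrated features, which is the defining condition of MaxEnt IOC.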