UCSB Computer Science Department

Official page for the Computer Science Department at UC Santa Barbara | Follow for updates!

The Computer Science Department at the University of California, Santa Barbara welcomes you.

09/04/2025

Retrieval is a key component in enhancing Large Language Models (LLMs) with external information, improving accuracy, recency, and contextual richness. Yet many pipelines use the same retrieval and reranking strategy for every query, wasting computation on simple cases, introducing irrelevant content, and struggling with reasoning-intensive tasks. Recent work on efficient and adaptive retrieval dynamically adjusts how much computation is used, where it is applied, and how information is organized: deciding when and how to retrieve, tuning granularity, reusing caches, constructing contexts selectively, and focusing ranking on the most promising candidates. By aligning retrieval effort with task complexity, these techniques enable scalable systems that handle diverse and complex information needs more effectively. My talk will provide an overview of this work and my research direction.
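
The "when and how much to retrieve" decision above can be sketched as a small routing policy. Everything below is a hypothetical illustration of the idea, not the talk's actual method: the complexity heuristic, keyword set, and thresholds are invented for the example.

```python
# Hypothetical sketch of an adaptive retrieval policy: a cheap complexity
# estimate decides whether to retrieve at all, how many candidates to
# fetch, and how many to rerank. Heuristic and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class RetrievalPlan:
    retrieve: bool   # skip retrieval entirely for simple queries
    top_k: int       # how many candidates to fetch
    rerank_k: int    # how many of those to pass to the reranker

def plan_retrieval(query: str) -> RetrievalPlan:
    # Toy complexity proxy: longer, reasoning-flavored questions get more budget.
    words = [w.strip("?,.").lower() for w in query.split()]
    complexity = len(words) + 5 * sum(w in {"why", "how", "compare"} for w in words)
    if complexity < 5:           # trivial query: answer from the model alone
        return RetrievalPlan(False, 0, 0)
    if complexity < 15:          # moderate: shallow retrieval, no reranking
        return RetrievalPlan(True, 20, 0)
    return RetrievalPlan(True, 100, 10)   # reasoning-heavy: deep retrieval + rerank

print(plan_retrieval("Capital of France?"))
print(plan_retrieval("Why did interest rates diverge, and how does that compare to 2008?"))
```

A production router would replace the heuristic with a learned classifier, but the control flow (route cheap queries past retrieval, spend reranking budget only on hard ones) is the same shape.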

https://ucsb.zoom.us/my/gyuwankim

09/04/2025

To achieve low-latency sensing, actuation, and control, applications are increasingly embedded in the world around us, i.e., at the edge of the network. These applications provide automation, autonomy, situational awareness, and data-driven intelligence for local operations. Example applications include smart systems for agriculture, wildlife conservation, and physical infrastructure. These applications perform a wide range of communication and computation in technologically hostile environments using heterogeneous devices with strict power and network limitations. To compute efficiently over these distributed systems, we require robust application deployment and scheduling techniques that are specialized for these challenging settings. In this talk, I will survey the methodologies available in this research space, describe what we have done to improve scheduling for the edge, and propose new directions for this research.
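
As a deliberately simplified illustration of the placement problem such schedulers solve, here is a greedy list scheduler over heterogeneous devices with CPU and power budgets. The task model, device attributes, and policy are hypothetical, not the techniques from the talk.

```python
# Illustrative sketch: place tasks on heterogeneous edge devices, respecting
# per-device CPU capacity and power budgets. Largest tasks are placed first,
# each onto the feasible device with the most remaining CPU headroom.
def greedy_place(tasks, devices):
    """tasks: list of (name, cpu_demand, watts);
    devices: dict name -> {'cpu': ..., 'power_budget': ...} (mutated in place).
    Returns a dict mapping task name -> device name."""
    placement = {}
    for name, cpu, watts in sorted(tasks, key=lambda t: -t[1]):  # big tasks first
        feasible = [d for d, r in devices.items()
                    if r["cpu"] >= cpu and r["power_budget"] >= watts]
        if not feasible:
            raise RuntimeError(f"no feasible device for {name}")
        best = max(feasible, key=lambda d: devices[d]["cpu"])
        devices[best]["cpu"] -= cpu
        devices[best]["power_budget"] -= watts
        placement[name] = best
    return placement

devices = {"gateway": {"cpu": 4.0, "power_budget": 10.0},
           "sensor_hub": {"cpu": 1.0, "power_budget": 2.0}}
tasks = [("detect", 3.0, 6.0), ("filter", 0.5, 1.0)]
placement = greedy_place(tasks, devices)
print(placement)
```

Real edge schedulers also weigh network cost, data locality, and failure recovery; the sketch only shows the constrained bin-packing core of the problem.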

https://tinyurl.com/yhuyuynf

09/04/2025

Digital agriculture is the use of technology and advanced analytics to provide decision support and automation for farm operations. These advances enable farmers to reduce their costs and operational complexity, while enhancing farm productivity and sustainability. In this MAE, we investigate the use of Computational Fluid Dynamics (CFD) for digital agriculture applications. CFD modeling is a cost-effective way to explore and estimate complex environmental and operating conditions such as those found on farms. Unfortunately, because of the computational complexity of CFD modeling, simulations can be time consuming to perform and thus difficult to use in real-time decision making (e.g., for irrigation control, frost protection, and spray applications). Therefore, we also explore alternative approaches that leverage machine learning (ML) and that optimize computational efficiency to reduce the overhead of using CFD "in the loop". In addition, to enable development of end-to-end applications on-farm, we investigate Internet of Things (IoT) and edge computing systems that leverage modeling and data-driven analytics to enable decision support and intelligent automation for agricultural settings.
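
The "surrogate in the loop" pattern described above can be sketched in a few lines: run the expensive simulator offline to build a training set, then answer real-time queries from a cheap learned model. The 1-D toy "CFD" function and the nearest-neighbor surrogate below are stand-ins invented for illustration; nothing here is from the MAE itself.

```python
# Hedged sketch of replacing CFD "in the loop" with a learned surrogate.
import math

# Stand-in for an expensive CFD run: maps an operating condition
# (wind speed) to a canopy temperature. In reality this takes hours.
def expensive_cfd(wind: float) -> float:
    return 10.0 + 2.0 * math.tanh(wind / 3.0)

# Offline parameter sweep builds the training set once.
train = [(w / 2.0, expensive_cfd(w / 2.0)) for w in range(0, 21)]

def surrogate(wind: float) -> float:
    # Cheapest possible learned model: 1-nearest-neighbor lookup.
    # A real system would fit a regression or neural surrogate instead.
    return min(train, key=lambda s: abs(s[0] - wind))[1]

# The real-time decision loop queries the surrogate, not the CFD code.
print(round(surrogate(4.2), 2), round(expensive_cfd(4.2), 2))
```

The design trade-off is the usual one: the surrogate is only trustworthy inside the region the offline sweep covered, which is why confidence-aware hybrids that fall back to the simulator are attractive.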

https://ucsb.zoom.us/j/84591822970

09/04/2025

The Ethereum blockchain and its decentralized finance (DeFi) ecosystem have fundamentally transformed financial infrastructures, with over 100 billion USD in total value locked across thousands of interconnected protocols. With the growth of DeFi, the interactions between smart contracts have become increasingly complex, enabling advanced financial protocols like lending platforms and automated market makers. Nonetheless, bugs in smart contract interactions are a common cause of critical vulnerabilities: many services interact with contracts that must be trusted to manage digital assets, creating a web of dependencies where a single vulnerability can cascade across multiple protocols. As a result, hundreds of millions of dollars are stolen every year through exploits that target the subtle semantics of inter-contract communication.

The core security challenge lies not in simple coding errors, which existing tools readily detect, but in these complex multi-contract interactions. In this talk, I will address this fundamental gap by introducing two novel analysis techniques that systematically model, identify, and exploit multi-contract vulnerabilities at scale, culminating in GREED, a versatile symbolic execution framework that empowers security researchers to rapidly prototype new analyses. Through automated discovery and synthesis of proof-of-concept exploits across millions of deployed contracts, my work demonstrates that inter-contract vulnerabilities represent a systemic threat to blockchain security, and provides the tools and methodologies necessary to detect and prevent these attacks before they result in financial losses.
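
To convey the flavor of the properties such analyses check, here is a toy ordering check over a made-up mini-IR: an external call that happens before the state update is the classic reentrancy-shaped bug. Real tools like GREED operate on EVM bytecode with symbolic execution; this sketch only illustrates the "state change after ceding control across a trust boundary" idea, and all names in it are invented.

```python
# Toy detector for an unsafe call/write ordering inside one function.
def has_unsafe_ordering(trace):
    """trace: list of ('call', callee) / ('write', slot) ops, in program order."""
    seen_external_call = False
    for op, arg in trace:
        if op == "call":
            seen_external_call = True          # control handed to another contract
        elif op == "write" and seen_external_call:
            return True                        # state updated only after the call
    return False

withdraw = [("call", "attacker_contract"), ("write", "balances")]       # vulnerable shape
safe_withdraw = [("write", "balances"), ("call", "recipient")]          # checks-effects-interactions
print(has_unsafe_ordering(withdraw), has_unsafe_ordering(safe_withdraw))
```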

08/26/2025

Software bugs continue to pose significant challenges to modern society, causing considerable economic impact, and, in the worst case, leading to catastrophic physical consequences. When bugs evolve into security vulnerabilities, the risk of intentional exploitation carried out by malicious actors escalates, potentially creating severe consequences for human rights and national security. Thus, identifying and addressing the root causes of software vulnerabilities (at scale) has become crucial. However, automated vulnerability identification is an inherently complex task. First, the diversity and complexity of modern software systems require an understanding of many domain-specific details, making it impossible to create a one-size-fits-all solution. Second, automated security analyses need to strike an optimal balance between precision and efficiency: catching as many instances of a class of vulnerability as possible, while reducing false positives.

This talk provides insights into the evolution of program analysis techniques, particularly focusing on Domain-Driven Automated Security Analyses (DDASA). In particular, the goal of a DDASA is to first design custom “oracles” to detect classes of domain-specific vulnerabilities, and then, leverage a combination of static and dynamic analyses to identify such weaknesses. During this presentation, I will discuss my approach to designing practical, domain-specific security analyses for the identification of vulnerabilities in complex software systems (such as firmware and DeFi applications) and demonstrate their effectiveness on real-world targets.

https://ucsb.zoom.us/j/5604068241

08/26/2025

Visual prostheses ("bionic eyes") aim to restore sight by electrically stimulating the retina or cortex, but current systems lack the intelligence to deliver consistent, high-quality percepts. This dissertation contributes to the development of a ‘Smart Bionic Eye’, a model-informed and user-adaptive vision restoration system, by introducing a computational framework that integrates deep learning, perceptual modeling, and human-in-the-loop optimization.

The work begins with a data-driven model of phosphene appearance that predicts how perceptual features such as brightness, size, and shape vary with stimulus parameters. Trained on data spanning years of psychophysics and neuroanatomy, the model generalizes across electrodes and stimulation conditions and serves as the foundation for informed stimulus design. To solve the inverse problem of generating the electrical stimulus for a target percept, a deep neural network encoder is trained to invert the perceptual model. This encoder enables end-to-end optimization and consistently outperforms standard stimulation strategies in simulated users.

To handle user variability and perceptual drift over time, the framework incorporates human-in-the-loop optimization using preferential Bayesian methods. This approach adapts stimulation strategies based on real-time user feedback and quickly converges to personalized solutions. Studies with sighted participants viewing simulated prosthetic vision demonstrate the method’s effectiveness and robustness to noise and model mismatch.

Finally, the framework is extended to the visual cortex. Using neural recordings from a blind participant implanted with a 96-channel Utah array, a deep model is trained to predict single-trial neural responses and synthesize stimulation patterns that evoke targeted activity. Both inverse networks and gradient-based controllers outperform conventional techniques and better modulate evoked activity.

Together, these contributions establish a scalable framework for intelligent visual prostheses that adapt to individual users and bring the Smart Bionic Eye closer to clinical reality.
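
The inverse problem above has a simple skeleton: given a differentiable forward model percept = f(stimulus), search for the stimulus whose predicted percept matches a target. The square-root "perceptual model" and plain gradient descent below are placeholders invented for illustration; the dissertation uses a deep phosphene model and a trained encoder network instead.

```python
# Toy inverse problem: find the amplitude whose predicted brightness
# matches a target, by gradient descent on 0.5 * (f(amp) - target)^2.
def forward(amp: float) -> float:
    # hypothetical forward model: brightness grows sublinearly with amplitude
    return amp ** 0.5

def invert(target_brightness: float, lr: float = 2.0, steps: int = 300) -> float:
    amp = 1.0
    for _ in range(steps):
        err = forward(amp) - target_brightness
        grad = err * 0.5 * amp ** -0.5     # chain rule: d/d_amp of the squared error
        amp = max(1e-6, amp - lr * grad)   # keep amplitude positive
    return amp

amp = invert(3.0)
print(round(forward(amp), 3))   # recovered percept is close to the target 3.0
```

An encoder network amortizes this search: instead of iterating per target, it learns the map target → stimulus once, which is what makes end-to-end optimization practical.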

https://ucsb.zoom.us/j/87572344837

08/26/2025

Recent advances in diffusion-based image generation have enabled more diverse, high-quality image generation, opening new possibilities in game development, filmmaking, and advertising. However, these tasks often require precise control over the generation process to meet specific artistic, narrative, or branding goals. This demands conditioning inputs such as text instructions, reference images, or visual attributes, which require training data that accurately reflect image-condition associations. Existing training data creation approaches, including manual annotation, data re-purposing, and prompt engineering, offer some utility but face notable limitations in scalability, robustness, and quality, ultimately constraining resulting models' capabilities.

In response, this talk presents our research on automated training data creation methods for enabling and improving instruction-guided and attribute-based image editing with diffusion models, explored from two directions: refining existing datasets and developing evaluation models to guide fine-tuning.

For instruction-guided image editing, we identify semantic misalignment between text instructions and before/after image pairs as a major limitation in current training datasets. We then propose a self-supervised method to detect and correct this misalignment, improving editing quality after fine-tuning on the corrected samples.

Additionally, we note that existing evaluation metrics often rely on models with limited semantic understanding. To address this, we fine-tune vision-language models as robust evaluators using high-quality synthetic data. These evaluators can also act as reward models to guide editing model training via reinforcement learning.

Extending this framework, we explore attribute-based editing with novel visual attributes. We introduce a web-crawling pipeline to curate samples for few-shot fine-tuning, enabling diffusion models to become attribute-aware. These models can generate diverse samples to train an attribute scorer that directs attribute-based editing.

Finally, we apply our methods to applications such as virtual try-on and reference- or stroke-guided editing by introducing new conditioning mechanisms within diffusion models. Together, these contributions enable scalable, high-quality training data generation for diffusion-based conditional image editing, which improves model performance, controllability, and generalization.

https://ucsb.zoom.us/j/81715448696

08/26/2025

Reinforcement learning (RL) provides a framework for understanding how agents learn from interaction with their environment, balancing exploration and exploitation to maximize long-term reward. In neuroscience and psychology, RL has become a central model for explaining human behavior, from trial-and-error learning to value-based decision-making, with computational variables such as prediction errors closely linked to neural signals in the brain. This convergence has made RL a powerful bridge between computer science and cognitive science.

In this talk, I will present work built around a unified spatial-temporal navigation task. First, I introduce an algorithm that expands the range of tasks current methods can solve. Second, I show how computational modeling can capture human behavior in this task and align with neural evidence. Finally, I will discuss ongoing work on interpretable and risk-aware RL frameworks for modeling strategy selection.
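
The "prediction error" variable mentioned above is, in the simplest RL account, the temporal-difference (TD) error δ = r + γV(s') − V(s). A minimal TD(0) sketch on a two-state toy task (the states and rewards are made up for illustration, not from the talk):

```python
# TD(0) value learning: update each state's value by a fraction of the
# prediction error delta = r + gamma * V(s') - V(s).
def td0(episodes, alpha=0.1, gamma=0.9):
    V = {}  # state -> estimated value
    for episode in episodes:
        for (s, r, s_next) in episode:
            v_next = V.get(s_next, 0.0) if s_next is not None else 0.0
            delta = r + gamma * v_next - V.get(s, 0.0)   # the prediction error
            V[s] = V.get(s, 0.0) + alpha * delta
    return V

# The agent repeatedly moves "start" -> "goal" and gets reward 1 at the goal.
episode = [("start", 0.0, "goal"), ("goal", 1.0, None)]
V = td0([episode] * 500)
print(round(V["goal"], 2), round(V["start"], 2))   # -> 1.0 0.9
```

This δ is the quantity whose time course famously resembles phasic dopamine signaling, which is what makes TD models such a useful bridge between algorithms and neural data.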

https://tinyurl.com/3ak8kv3b

08/26/2025

We present an augmented reality (AR) design tool for clay 3D printing that brings traditional ceramic craft techniques into digital fabrication workflows. The system supports intuitive, embodied interaction by translating familiar physical tools—such as ribs and stamps—into modular digital counterparts used to shape and manipulate 3D-printed clay forms. Grounded in formative studies and iterative tool sketching, the workflow is tailored to the specific constraints and affordances of clay as a material, emphasizing direct toolpath control and hands-on interaction. By bridging physical and digital design spaces, this work explores how AR can support accessible, craft-informed approaches to fabrication and opens new directions for integrating traditional techniques for making into computational design.

08/26/2025

Thermal analysis poses significant computational challenges in three-dimensional integrated circuit (3D-IC) design, where accurate and efficient temperature prediction is essential for ensuring device performance and reliability. Recent advances in operator learning, particularly DeepOHeat and its variants, have substantially accelerated thermal simulation by learning direct mappings from diverse design configurations to temperature fields. DeepOHeat-v1 further pioneered a hybrid optimization workflow that integrates operator learning with finite difference methods (FDM) using the Generalized Minimal Residual (GMRES) algorithm for confidence-aware thermal optimization. This paper extends these developments by implementing an operator learning-based multigrid strategy based on DeepOHeat-v1 to enhance the convergence of GMRES. Our approach constructs a hybrid two-level multigrid that combines a Jacobi smoother with a DeepONet-based coarse space. The coarse space is constructed based on the subspace correction framework, building transfer operators by extracting trunk basis functions from a trained DeepOHeat model to map the problem to a smaller subspace. This coarse space effectively captures low-frequency error components by exploiting the spectral bias of neural networks, while traditional iterative methods address high-frequency components. We develop both additive and multiplicative variants of this hybrid multigrid method and evaluate their performance on industrial design test cases. Numerical results demonstrate that our approach substantially accelerates GMRES convergence compared to standard preconditioning methods, providing a robust and efficient solution for rapid thermal analysis. This enhanced computational framework enables more effective design optimization in complex 3D-IC environments, significantly reducing the computational cost while maintaining solution accuracy.
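
The two-level idea generalizes beyond this paper: a smoother damps high-frequency error while a small coarse space corrects low-frequency error. In the sketch below, a 1-D Laplacian stands in for the thermal operator and low-frequency sine modes stand in for trunk basis functions extracted from a trained DeepONet; this is the generic additive subspace-correction pattern, not the paper's implementation.

```python
# Additive two-level preconditioner: one Jacobi sweep (high frequencies)
# plus a Galerkin coarse correction over a few smooth basis vectors
# (low frequencies), applied inside a damped Richardson iteration.
import numpy as np

n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian stencil

# "Coarse space": the 8 smoothest sine modes (a DeepONet's trunk bases
# would play this role in the paper's setting).
P = np.stack([np.sin(np.pi * k * np.arange(1, n + 1) / (n + 1))
              for k in range(1, 9)], axis=1)
A_c = P.T @ A @ P                                      # Galerkin coarse operator

def precondition(r):
    z = r / np.diag(A)                                 # Jacobi smoother
    z = z + P @ np.linalg.solve(A_c, P.T @ r)          # additive coarse correction
    return z

b = np.random.default_rng(0).standard_normal(n)
x = np.zeros(n)
for _ in range(200):                                   # damped Richardson iteration
    x += 0.3 * precondition(b - A @ x)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))   # relative residual shrinks
```

The multiplicative variant applies the smoother and the coarse correction sequentially (re-computing the residual in between) rather than summing them; in practice either would be wrapped around GMRES as a preconditioner rather than used in a plain Richardson loop.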

https://ucsb.zoom.us/j/87027261707

08/22/2025

Vision-Language-Action (VLA) models are an emerging class of multimodal policies that combine internet-pretrained vision-language models with an action head to perform robot manipulation tasks from raw visual inputs and language instructions. However, state-of-the-art VLAs often rely on large language models as backbones, making them computationally intensive and slow at inference time — posing challenges for real-time deployment in robotics. We focus on reducing the memory footprint and improving the inference latency of these models to make them viable on edge devices with limited compute. Observing high similarity in hidden states across adjacent transformer layers, we explore pruning redundancies in the LLM as a means of accelerating autoregressive action generation. We evaluate how this pruning impacts downstream robotics performance, both with and without additional fine-tuning.
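
The pruning criterion described above can be sketched concretely: measure how similar each layer's output is to its input, and drop layers whose update is nearly a no-op. The toy "layers" below are plain functions invented for illustration; in the real setting they would be the VLA backbone's transformer blocks, and the threshold would be tuned against task performance.

```python
# Prune near-identity layers by cosine similarity between a layer's
# input and output hidden states.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def prune_redundant(layers, hidden, threshold=0.999):
    """Keep only layers whose output differs enough from their input."""
    kept = []
    for layer in layers:
        out = layer(hidden)
        if cosine(hidden, out) < threshold:   # layer meaningfully changes the state
            kept.append(layer)
            hidden = out                      # propagate through kept layers only
    return kept

layers = [
    lambda h: [x + 1.0 for x in h],       # meaningful update: keep
    lambda h: [x * 1.0001 for x in h],    # near-identity: prune
    lambda h: [-x for x in h],            # meaningful update: keep
]
hidden = [1.0, 2.0, 3.0]
print(len(prune_redundant(layers, hidden)))   # -> 2
```

Because pruning shifts the distribution seen by later layers, a short fine-tuning pass after pruning typically recovers most of the lost accuracy, which is why the evaluation covers both regimes.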

https://ucsb.zoom.us/j/4922389261

08/22/2025

Understanding how the brain integrates sensory inputs to guide behavior is essential for developing robust and generalizable Artificial Intelligence (AI). As autonomous AI systems are increasingly deployed in real-world settings, ensuring their safe and reliable operation under unpredictable conditions is a critical challenge. This study compares the navigation strategies of mice to those of reinforcement learning (RL) agents trained with Proximal Policy Optimization (PPO) and equipped with a range of visual encoders, from simple feedforward models to deep convolutional and biologically inspired architectures. By training both mice and agents on the same virtual foraging task and evaluating generalization to unseen visual perturbations, we aim to identify strategic differences that underlie the robustness of biological navigation. This work also provides the baseline training pipeline for the Mouse vs AI: Robust Visual Foraging Competition at NeurIPS 2025 (robustforaging.github.io).

Address

Harold Frank Hall
Santa Barbara, CA
93106

Opening Hours

Monday 9am - 12pm, 1pm - 4pm
Tuesday 9am - 12pm, 1pm - 4pm
Wednesday 9am - 12pm, 1pm - 4pm
Thursday 9am - 12pm, 1pm - 4pm
Friday 9am - 12pm, 1pm - 4pm

Telephone

(805) 893-4321
