UCSB Computer Science Department

Official page for the Computer Science Department at UC Santa Barbara | Follow for updates!

The Computer Science Department at the University of California, Santa Barbara welcomes you.

10/16/2025

Do you love programming? Do you love problem solving? Do you love pizza?

If yes, participate in the UCSB ACM ICPC Local Contest (see https://cs.ucsb.edu/~daniello/icpc2025/ for more info). You get 5 hours of fun problem solving with friends, free pizza, as well as a chance to be crowned UCSB Programming Champion. Top-performing student teams will be invited to participate in the SoCal ICPC Regionals (November 15th at Riverside City College).

When?

Sunday, October 19th, 10:30 AM to 4:30 PM

Where?

TBD — check https://cs.ucsb.edu/~daniello/icpc2025/ for updates

Who can participate?

Everyone (students, faculty, spouses, pets) is welcome to participate in the local contest! However, only teams consisting of ICPC-eligible students can qualify for the regionals (essentially, if you started college in 2021 or later, you're ICPC-eligible; see https://icpc.global/regionals/rules for the exact rules).

How to register and participate?

Step 1: Find teammates! Participation is in teams of up to 3 students. Teams of 2 and individual participants are very welcome; nevertheless, we encourage people to team up in groups of 3, as that makes it more fun and likely increases your winning chances.

Step 2: Make user accounts at open.kattis.com for each of the participants on your team.

Step 3: Go to https://cs.ucsb.edu/~daniello/icpc2025/ and click on the registration link, where you will be prompted for a team name as well as the names, emails, and Kattis usernames of all your team members. Make sure to (a) write the correct Kattis usernames for everyone, and (b) only register each team once. The deadline to register is Friday, October 17th, at 3 PM!

Step 4: Show up at the contest location at 9:30 AM on Sunday the 19th with pen and paper and one laptop per team. See https://cs.ucsb.edu/~daniello/icpc2025/ for more info!

09/17/2025

Program analysis has produced a rich set of techniques for discovering, exercising, and demonstrating software bugs. These methods, however, often struggle to scale to the modern, complex, and stateful applications that underpin critical infrastructure. My research addresses this gap by advancing the techniques and practical applications of program analysis.

First, I will present how we extended symbolic execution to operate effectively on complex targets such as operating system kernel drivers. In particular, I will describe our symbolic analysis framework POPKORN, which is capable of finding real-world vulnerabilities in Windows kernel drivers while addressing long-standing challenges such as path explosion and environment modeling. POPKORN has since been adopted and extended by the community to uncover more than 100 vulnerable Windows kernel drivers.
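
For readers unfamiliar with the underlying technique, here is a minimal, generic sketch of symbolic execution using the open-source angr framework; the binary path and the target/avoid addresses are hypothetical placeholders, and this illustrates the general approach rather than POPKORN itself.

```python
# Generic symbolic-execution sketch with angr (binary and addresses invented).
import angr
import claripy

# Load the target without pulling in shared libraries, a common way
# to keep the explored state space small.
proj = angr.Project("./target_bin", auto_load_libs=False)

# Treat 16 bytes of input as fully symbolic.
user_input = claripy.BVS("user_input", 8 * 16)
state = proj.factory.entry_state(stdin=user_input)

# Explore paths, steering toward a (hypothetical) dangerous sink and
# pruning an uninteresting region to fight path explosion.
simgr = proj.factory.simulation_manager(state)
simgr.explore(find=0x401337, avoid=0x401400)

if simgr.found:
    # Ask the constraint solver for a concrete input reaching the sink.
    print(simgr.found[0].solver.eval(user_input, cast_to=bytes))
```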

Next, I will describe our work on integrating large language models (LLMs) into program analysis workflows. As part of the DARPA AI Cyber Challenge (AIxCC), we developed ARTIPHISHELL, a cyber reasoning system built on a distributed architecture of more than sixty cooperating components and AI agents. I will describe how ARTIPHISHELL combines traditional analyses with LLMs to improve the automated analysis of large open-source applications such as nginx, libxml2, and SQLite.

https://ucsb.zoom.us/j/2267578965

09/10/2025

Foundation Models have achieved remarkable success across many domains, but the ever-growing model sizes and training token counts pose significant challenges in computation, memory, communication, and overall resource cost. Consequently, efficient pre-training has become a critical area of research. Recent efforts address efficiency from multiple angles. At the algorithmic level, innovations such as low-rank factorization, sparse attention, and bottleneck architectures reduce FLOPs and memory usage. At the system level, kernel-level optimizations improve raw throughput, while graph- and runtime-level techniques enhance scalability across large clusters. Increasingly, the intersection of these directions—algorithm–system co-design—is emerging as a promising path forward. This talk will provide an overview of research on efficient foundation model pre-training from both algorithmic and system perspectives. I will also outline my future work on scaling efficient model architectures while fully exploiting hardware capabilities to develop more scalable and resource-efficient training systems.
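
One of the algorithmic levers above can be made concrete in a few lines. The numpy sketch below (with invented sizes) shows how low-rank factorization trades a dense d x d weight for two rank-r factors, cutting the per-token multiply count from d*d to 2*d*r:

```python
# Why low-rank factorization cuts compute: replace W (d x d) with A (d x r)
# and B (r x d). All sizes here are made up for illustration.
import numpy as np

d, r, n_tokens = 4096, 256, 8
x = np.random.randn(n_tokens, d)

W = np.random.randn(d, d)   # dense layer: d*d  ~ 16.8M mults per token
A = np.random.randn(d, r)   # factored layer: 2*d*r ~ 2.1M mults per token
B = np.random.randn(r, d)

y_dense = x @ W
y_lowrank = (x @ A) @ B     # same output shape, ~8x fewer FLOPs

print(y_dense.shape, y_lowrank.shape)         # (8, 4096) (8, 4096)
print(f"FLOP ratio: {(d * d) / (2 * d * r):.1f}x")
```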

09/10/2025

Rust is a statically compiled systems programming language, created with an emphasis on memory safety. The Rust compiler can statically verify memory safety in many cases, but some memory-safe code cannot be verified by the compiler. For these cases, developers can use the unsafe keyword to disable some of the compiler’s checks. Because hand-written unsafe code is prone to human error, programmers can use additional verification tools to ensure that the unsafe code satisfies the invariants of the Rust memory model. This talk will first discuss the conditions required for memory safety, how the scope of memory safety differs between studies, and how Rust's memory model compares to those of other systems programming languages such as C and C++. Next, the talk will cover the existing tools for static verification of Rust memory safety, including symbolic execution, SMT solving, and semi-automated proof assistants. Finally, it will turn to dynamic verification methods, including memory sanitizers and dynamic tracing tools, along with new methods for improving the performance of dynamic verification. Both static and dynamic tools will be compared with respect to their soundness, completeness, and performance.

09/09/2025

Recent advances in fMRI-based visual reconstruction have enabled subject-agnostic approaches that leverage a shared, common representational space. In this talk, I will introduce innovative methods for efficiently mapping individual brain signals into this unified space. Building on these promising results, I will discuss future directions aimed at integrating additional neuroimaging modalities—specifically EEG and MEG—with fMRI data. Additionally, I will explore how human visual brain representations compare with the representational spaces of large language models, focusing on their geometrical properties. Finally, I will outline plans to validate these methods experimentally, with the goal of broadening their impact within cognitive neuroscience.
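
To make the mapping idea concrete, here is a minimal sketch assuming a simple linear (ridge-regression) map into a synthetic shared space; the shapes and data are stand-ins, and the actual methods in the talk may differ.

```python
# Mapping one subject's fMRI responses into a shared representational
# space with ridge regression. All data here are synthetic.
import numpy as np
from sklearn.linear_model import Ridge

n_stimuli, n_voxels, shared_dim = 200, 1000, 64

rng = np.random.default_rng(0)
subject_fmri = rng.standard_normal((n_stimuli, n_voxels))    # per-subject signals
shared_space = rng.standard_normal((n_stimuli, shared_dim))  # common representation

# Learn a linear map from this subject's voxel space to the shared space.
mapper = Ridge(alpha=10.0)
mapper.fit(subject_fmri, shared_space)

# New brain activity from the same subject can now be projected into
# the unified space used by a subject-agnostic decoder.
new_scan = rng.standard_normal((1, n_voxels))
print(mapper.predict(new_scan).shape)  # (1, 64)
```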

https://ucsb.zoom.us/my/christos.z

09/04/2025

Retrieval is a key component in enhancing Large Language Models (LLMs) with external information, improving accuracy, recency, and contextual richness. Yet many pipelines use the same retrieval and reranking strategy for every query, wasting computation on simple cases, introducing irrelevant content, and struggling with reasoning-intensive tasks. Recent work on efficient and adaptive retrieval dynamically adjusts how much computation is used, where it is applied, and how information is organized: it decides when and how to retrieve, tunes granularity, reuses caches, constructs contexts selectively, and focuses ranking on the most promising candidates. By aligning retrieval effort with task complexity, these methods enable scalable systems that handle diverse and complex information needs more effectively. My talk will provide an overview of this work and my research direction.
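
As a toy illustration of the adaptive idea (not any specific system from the talk), the sketch below scales retrieval depth and reranking with a crude query-complexity proxy; the retriever and reranker are self-contained stubs.

```python
# Adaptive retrieval: spend little on easy queries, escalate for hard ones.
def complexity(query: str) -> float:
    # Crude proxy: longer, multi-clause questions get more budget.
    return min(1.0, len(query.split()) / 20)

def retrieve(query: str, k: int) -> list[str]:
    # Stand-in for a real retriever (BM25, dense embeddings, etc.).
    return [f"doc_{i} for {query!r}" for i in range(k)]

def rerank(query: str, docs: list[str], top: int) -> list[str]:
    # Stand-in for a cross-encoder reranker; here it just truncates.
    return docs[:top]

def adaptive_retrieve(query: str) -> list[str]:
    c = complexity(query)
    if c < 0.3:                  # simple lookup: shallow, no reranking
        return retrieve(query, k=3)
    k = int(10 + 40 * c)         # scale retrieval depth with complexity
    return rerank(query, retrieve(query, k=k), top=5)

print(adaptive_retrieve("capital of France"))
print(adaptive_retrieve("compare the trade-offs between dense and sparse "
                        "retrieval for multi-hop reasoning questions"))
```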

https://ucsb.zoom.us/my/gyuwankim

09/04/2025

To achieve low latency sensing, actuation, and control, applications are increasingly embedded in the world around us, i.e., at the edge of the network. These applications provide automation, autonomy, situational awareness, and data-driven intelligence for local operations. Example applications include smart systems for agriculture, wildlife conservation, and physical infrastructure. These applications perform a wide range of communication and computation in technologically hostile environments using heterogeneous devices with strict power and network limitations. To efficiently compute over these distributed systems, we require robust application deployment and scheduling techniques that are specialized for these challenging settings. In this talk, I will give an overview of the different methodologies available in this research space, describe what we've done to improve scheduling for the edge, and propose new directions for this research.
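
To ground the scheduling problem, here is an illustrative greedy heuristic: place each task on the feasible device that finishes it soonest, subject to a power cap. The devices, speeds, and budgets are invented for this sketch; real edge schedulers are considerably more sophisticated.

```python
# Greedy placement of tasks onto heterogeneous edge devices (all values invented).
tasks = [("detect", 8.0), ("compress", 3.0), ("upload", 2.0), ("plan", 6.0)]
devices = {                      # name: (relative speed, power budget)
    "sensor-node": (0.5, 4.0),
    "edge-gateway": (2.0, 10.0),
    "field-server": (4.0, 6.0),
}

load = {d: 0.0 for d in devices}    # accumulated runtime per device
power = {d: 0.0 for d in devices}   # accumulated power draw (toy: ~ runtime)

for name, work in sorted(tasks, key=lambda t: -t[1]):   # largest tasks first
    best, best_finish = None, float("inf")
    for dev, (speed, budget) in devices.items():
        runtime = work / speed
        if power[dev] + runtime > budget:    # respect the power cap
            continue
        finish = load[dev] + runtime
        if finish < best_finish:
            best, best_finish = dev, finish
    if best is None:
        print(f"{name}: no feasible device")
        continue
    load[best] += work / devices[best][0]
    power[best] += work / devices[best][0]
    print(f"{name} -> {best} (finishes at t={best_finish:.1f})")
```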

https://tinyurl.com/yhuyuynf

09/04/2025

Digital agriculture is the use of technology and advanced analytics to provide decision support and automation for farm operations. These advances enable farmers to reduce their costs and operational complexity while enhancing farm productivity and sustainability. In this MAE, we investigate the use of Computational Fluid Dynamics (CFD) for digital agriculture applications. CFD modeling is a cost-effective way to explore and estimate complex environmental and operating conditions such as those found on farms. Unfortunately, because of the computational complexity of CFD modeling, simulations can be time-consuming to perform and thus difficult to use in real-time decision making (e.g., for irrigation control, frost protection, and spray applications). Therefore, we also explore alternative approaches that leverage machine learning (ML) and that optimize computational efficiency to reduce the overhead of using CFD "in the loop". In addition, to enable development of end-to-end applications on-farm, we investigate Internet of Things (IoT) and edge computing systems that leverage modeling and data-driven analytics to enable decision support and intelligent automation for agricultural settings.
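
The surrogate idea can be sketched in a few lines: train a fast ML model on (conditions -> outcome) pairs produced offline by CFD runs, then query the cheap model in the real-time loop. The data-generating function below is a synthetic stand-in for an actual solver, and the features are invented.

```python
# Training an ML surrogate on synthetic "CFD" data (all values invented).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Pretend each row is one CFD run: (wind speed, air temp, humidity) inputs.
X = rng.uniform([0, -5, 10], [15, 35, 100], size=(500, 3))

# Stand-in for an expensive CFD output, e.g. a frost-risk index.
y = np.exp(-X[:, 1] / 10) * (1 + 0.1 * X[:, 0]) + rng.normal(0, 0.01, 500)

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X, y)

# Milliseconds instead of hours: cheap enough for irrigation/frost control.
print(surrogate.predict([[5.0, -2.0, 80.0]]))
```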

https://ucsb.zoom.us/j/84591822970

09/04/2025

The Ethereum blockchain and its decentralized finance (DeFi) ecosystem have fundamentally transformed financial infrastructures, with over 100 billion USD in total value locked across thousands of interconnected protocols. With the growth of DeFi, the interactions between smart contracts have become increasingly complex, enabling advanced financial protocols like lending platforms and automated market makers. Nonetheless, bugs in smart contract interactions are a common cause of critical vulnerabilities: many services interact with contracts that must be trusted to manage digital assets, creating a web of dependencies where a single vulnerability can cascade across multiple protocols. As a result, hundreds of millions of dollars are stolen every year through exploits that target the subtle semantics of inter-contract communication.

The core security challenge lies not in simple coding errors, which existing tools readily detect, but in these complex multi-contract interactions. In this talk, I will address this fundamental gap by introducing two novel analysis techniques that systematically model, identify, and exploit multi-contract vulnerabilities at scale, culminating in GREED, a versatile symbolic execution framework that empowers security researchers to rapidly prototype new analyses. Through automated discovery and synthesis of proof-of-concept exploits across millions of deployed contracts, my work demonstrates that inter-contract vulnerabilities represent a systemic threat to blockchain security, and provides the tools and methodologies necessary to detect and prevent these attacks before they result in financial losses.
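
For intuition, the toy check below flags one classic inter-contract bug shape over a simplified opcode trace: an external call made before the contract updates its own storage, the pattern behind many reentrancy exploits. This is an illustration only, not how GREED actually models contract interactions.

```python
# Toy scan for "external CALL before SSTORE" over a flattened opcode trace.
def call_before_state_update(trace: list[str]) -> bool:
    saw_external_call = False
    for op in trace:
        if op == "CALL":
            saw_external_call = True
        elif op == "SSTORE" and saw_external_call:
            return True        # balance written only after value left
    return False

vulnerable = ["SLOAD", "CALL", "SSTORE", "STOP"]   # send first, update later
safe       = ["SLOAD", "SSTORE", "CALL", "STOP"]   # checks-effects-interactions

print(call_before_state_update(vulnerable))  # True
print(call_before_state_update(safe))        # False
```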

08/26/2025

Software bugs continue to pose significant challenges to modern society, causing considerable economic impact and, in the worst case, leading to catastrophic physical consequences. When bugs evolve into security vulnerabilities, the risk of intentional exploitation by malicious actors escalates, potentially creating severe consequences for human rights and national security. Thus, identifying and addressing the root causes of software vulnerabilities (at scale) has become crucial. However, automated vulnerability identification is an inherently complex task. First, the diversity and complexity of modern software systems require an understanding of many domain-specific details, making it impossible to create a one-size-fits-all solution. Second, automated security analyses need to strike an optimal balance between precision and efficiency: catching as many instances of a class of vulnerability as possible while reducing false positives.

This talk provides insights into the evolution of program analysis techniques, focusing on Domain-Driven Automated Security Analyses (DDASA). The goal of a DDASA is first to design custom “oracles” that detect classes of domain-specific vulnerabilities, and then to leverage a combination of static and dynamic analyses to identify those weaknesses. During this presentation, I will discuss my approach to designing practical, domain-specific security analyses for identifying vulnerabilities in complex software systems (such as firmware and DeFi applications) and demonstrate their effectiveness on real-world targets.
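
The oracle idea can be sketched as a predicate over analysis findings. The finding format and the rule below are invented for illustration; a real oracle would encode much richer domain knowledge.

```python
# A domain-specific "oracle" as a predicate over analysis findings (invented).
from dataclasses import dataclass

@dataclass
class Finding:
    source: str      # where untrusted data enters (e.g. "network_recv")
    sink: str        # where it is consumed (e.g. "memcpy_length")
    sanitized: bool  # did any check sit on the path?

def firmware_overflow_oracle(f: Finding) -> bool:
    # Domain rule: network-derived lengths flowing unchecked into a
    # copy routine are reported; everything else is discarded.
    return (f.source == "network_recv"
            and f.sink == "memcpy_length"
            and not f.sanitized)

candidates = [
    Finding("network_recv", "memcpy_length", sanitized=False),
    Finding("network_recv", "memcpy_length", sanitized=True),
    Finding("config_file", "log_write", sanitized=False),
]
print([f for f in candidates if firmware_overflow_oracle(f)])  # first one only
```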

https://ucsb.zoom.us/j/5604068241

08/26/2025

Visual prostheses ("bionic eyes") aim to restore sight by electrically stimulating the retina or cortex, but current systems lack the intelligence to deliver consistent, high-quality percepts. This dissertation contributes to the development of a ‘Smart Bionic Eye’, a model-informed and user-adaptive vision restoration system, by introducing a computational framework that integrates deep learning, perceptual modeling, and human-in-the-loop optimization.

The work begins with a data-driven model of phosphene appearance that predicts how perceptual features such as brightness, size, and shape vary with stimulus parameters. Trained on data spanning years of psychophysics and neuroanatomy, the model generalizes across electrodes and stimulation conditions and serves as the foundation for informed stimulus design. To solve the inverse problem of generating the electrical stimulus for a target percept, a deep neural network encoder is trained to invert the perceptual model. This encoder enables end-to-end optimization and consistently outperforms standard stimulation strategies in simulated users.

To handle user variability and perceptual drift over time, the framework incorporates human-in-the-loop optimization using preferential Bayesian methods. This approach adapts stimulation strategies based on real-time user feedback and quickly converges to personalized solutions. Studies with sighted participants viewing simulated prosthetic vision demonstrate the method’s effectiveness and robustness to noise and model mismatch.

Finally, the framework is extended to the visual cortex. Using neural recordings from a blind participant implanted with a 96-channel Utah array, a deep model is trained to predict single-trial neural responses and synthesize stimulation patterns that evoke targeted activity. Both inverse networks and gradient-based controllers outperform conventional techniques and better modulate evoked activity. Together, these contributions establish a scalable framework for intelligent visual prostheses that adapt to individual users and bring the Smart Bionic Eye closer to clinical reality.
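
For intuition about the inverse problem, the numpy sketch below descends the gradient of a toy differentiable forward model to find a stimulus that evokes a target percept. The dissertation's approach trains a deep encoder instead of running per-target optimization like this, and the model here is entirely made up.

```python
# Inverting a toy forward model (stimulus -> percept) by gradient descent.
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 4)) * 0.5          # toy forward-model weights

def forward(stim):                             # stimulus -> predicted percept
    return np.tanh(W @ stim)

target = forward(rng.standard_normal(4))       # a reachable target percept
stim = np.zeros(4)                             # start from no stimulation

for _ in range(2000):                          # plain gradient descent
    pred = forward(stim)
    err = pred - target
    # Chain rule through tanh: gradient of ||forward(stim) - target||^2
    grad = W.T @ (2 * err * (1 - pred**2))
    stim -= 0.05 * grad

print(np.abs(forward(stim) - target).max())    # should be near 0
```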

https://ucsb.zoom.us/j/87572344837

08/26/2025

Recent advances in diffusion-based image generation have enabled more diverse, high-quality image generation, opening new possibilities in game development, filmmaking, and advertising. However, these tasks often require precise control over the generation process to meet specific artistic, narrative, or branding goals. This demands conditioning inputs such as text instructions, reference images, or visual attributes, which in turn require training data that accurately reflect image-condition associations. Existing approaches to training data creation, including manual annotation, data re-purposing, and prompt engineering, offer some utility but face notable limitations in scalability, robustness, and quality, ultimately constraining the resulting models' capabilities.

In response, this talk presents our research on automated training data creation methods for enabling and improving instruction-guided and attribute-based image editing with diffusion models, explored from two directions: refining existing datasets and developing evaluation models to guide fine-tuning.

For instruction-guided image editing, we identify semantic misalignment between text instructions and before/after image pairs as a major limitation in current training datasets. We then propose a self-supervised method to detect and correct this misalignment, improving editing quality after fine-tuning on the corrected samples.
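
The filtering idea can be sketched as scoring each (instruction, before, after) triple and dropping poorly aligned samples. The `alignment_score` function below is a hypothetical toy stand-in operating on text descriptions, not the paper's self-supervised detector.

```python
# Filtering an editing dataset by a (toy) instruction-alignment score.
def alignment_score(instruction: str, before: str, after: str) -> float:
    # Toy proxy: does the requested change appear in the "after" description
    # but not the "before"? A real system would use learned image/text
    # embeddings instead of string matching.
    changed = set(after.split()) - set(before.split())
    hits = sum(1 for w in instruction.lower().split() if w in changed)
    return hits / max(len(changed), 1)

dataset = [
    ("make the sky sunset", "blue sky beach", "sunset sky beach"),
    ("add a red car", "empty street", "empty street"),  # edit never happened
]

THRESHOLD = 0.3
kept = [s for s in dataset if alignment_score(*s) >= THRESHOLD]
print(kept)   # the misaligned second sample is dropped
```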

Additionally, we note that existing evaluation metrics often rely on models with limited semantic understanding. To address this, we fine-tune vision-language models as robust evaluators using high-quality synthetic data. These evaluators can also act as reward models to guide editing model training via reinforcement learning.

Extending this framework, we explore attribute-based editing with novel visual attributes. We introduce a web-crawling pipeline to curate samples for few-shot fine-tuning, enabling diffusion models to become attribute-aware. These models can generate diverse samples to train an attribute scorer, which directs attribute-based editing.

Finally, we apply our methods to applications such as virtual try-on and reference- or stroke-guided editing by introducing new conditioning mechanisms within diffusion models. Together, these contributions enable scalable, high-quality training data generation for diffusion-based conditional image editing, which improves model performance, controllability, and generalization.

https://ucsb.zoom.us/j/81715448696

Address

Harold Frank Hall
Santa Barbara, CA 93106

Opening Hours

Monday 9am - 12pm, 1pm - 4pm
Tuesday 9am - 12pm, 1pm - 4pm
Wednesday 9am - 12pm, 1pm - 4pm
Thursday 9am - 12pm, 1pm - 4pm
Friday 9am - 12pm, 1pm - 4pm

Telephone

(805) 893-4321
