Commit 47e52b8

Add Mordal ICLR'26 (#366)

* Mordal ICLR'26 Paper
* Add Mordal ICLR'26

1 parent a11e7f5 commit 47e52b8

3 files changed

Lines changed: 29 additions & 0 deletions

File tree

source/_data/SymbioticLab.bib

Lines changed: 21 additions & 0 deletions
@@ -2313,3 +2313,24 @@ @Article{gputogrid:arxiv26
   While the rapid expansion of data centers poses challenges for power grids, it also offers new opportunities as potentially flexible loads. Existing power system research often abstracts data centers as aggregate resources, while computer system research primarily focuses on optimizing GPU energy efficiency and largely ignores the grid impacts of optimized GPU power consumption. To bridge this gap, we develop a GPU-to-Grid framework that couples device-level GPU control with power system objectives. We study distribution-level voltage regulation enabled by flexibility in LLM inference, using batch size as a control knob that trades off the voltage impacts of GPU power consumption against inference latency and token throughput. We first formulate this problem as an optimization problem and then realize it as an online feedback optimization controller that leverages measurements from both the power grid and GPU systems. Our key insight is that reducing GPU power consumption alleviates violations of lower voltage limits, while increasing GPU power mitigates violations near upper voltage limits in distribution systems; this runs counter to the common belief that minimizing GPU power consumption is always beneficial to power grids.
   }
 }
+
+@InProceedings{mordal:iclr26,
+  author           = {Shiqi He and Insu Jang and Mosharaf Chowdhury},
+  booktitle        = {ICLR},
+  title            = {{Mordal}: Automated Pretrained Model Selection for Vision Language Models},
+  year             = {2026},
+  month            = {April},
+  publist_confkey  = {ICLR'26},
+  publist_link     = {paper || mordal-iclr26.pdf},
+  publist_topic    = {Systems + AI},
+  publist_abstract = {
+  Incorporating multiple modalities into large language models (LLMs) is a powerful way to enhance their understanding of non-textual data, enabling them to perform multimodal tasks.
+  Vision language models (VLMs) form the fastest-growing category of multimodal models because of their many practical use cases, including in healthcare, robotics, and accessibility.
+  Unfortunately, even though different VLMs in the literature demonstrate impressive visual capabilities on different benchmarks, they are handcrafted by human experts; there is no automated framework to create task-specific multimodal models.
+
+  We introduce Mordal, an automated multimodal model search framework that efficiently finds the best VLM for a user-defined task without manual intervention.
+  Mordal achieves this both by reducing the number of candidates to consider during the search process and by minimizing the time required to evaluate each remaining candidate.
+  Our evaluation shows that Mordal can find the best VLM for a given problem using $8.9\times$--$11.6\times$ fewer GPU hours than grid search.
+  We have also discovered that Mordal achieves about 69\% higher weighted Kendall's $\tau$ on average than the state-of-the-art model selection method across diverse tasks.
+  }
+}
1.1 MB (binary file not shown)

source/publications/index.md

Lines changed: 8 additions & 0 deletions
@@ -466,6 +466,14 @@ venues:
         date: 2025-12-02
         url: https://neurips.cc/Conferences/2025
         acceptance: 24.91%
+  ICLR:
+    category: Conferences
+    occurrences:
+      - key: ICLR'26
+        name: The 14th International Conference on Learning Representations
+        date: 2026-04-23
+        url: https://iclr.cc/Conferences/2026
+        acceptance: 26.97%
 {% endpublist %}

 ---
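For context, the `publist_confkey = {ICLR'26}` field in the new BibTeX entry is presumably matched against the `key` fields under `occurrences` in the `venues` map of `source/publications/index.md`. A minimal Python sketch of that join, with the venue data inlined as a dict mirroring the YAML (the `resolve_confkey` helper is hypothetical, not the actual publist plugin code):

```python
# Hypothetical sketch of how a publist-style plugin could resolve a BibTeX
# entry's publist_confkey against the venues map from index.md. The real
# plugin code is not part of this commit; the helper name is invented.

venues = {
    "ICLR": {
        "category": "Conferences",
        "occurrences": [
            {
                "key": "ICLR'26",
                "name": "The 14th International Conference on Learning Representations",
                "date": "2026-04-23",
                "url": "https://iclr.cc/Conferences/2026",
                "acceptance": "26.97%",
            }
        ],
    }
}

def resolve_confkey(confkey, venues):
    """Return (venue_name, occurrence) for a confkey like "ICLR'26"."""
    for venue, info in venues.items():
        for occ in info["occurrences"]:
            if occ["key"] == confkey:
                return venue, occ
    raise KeyError(f"unknown confkey: {confkey}")

venue, occ = resolve_confkey("ICLR'26", venues)
print(venue, occ["date"], occ["acceptance"])  # ICLR 2026-04-23 26.97%
```

This is why the commit touches both files: without the matching `ICLR'26` occurrence in the venues map, the new BibTeX entry would have no venue metadata (date, URL, acceptance rate) to render.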
