Vision and System Design Lab

Who We Are

Welcome to the HKUST Vision and System Design Lab. Our lab focuses on the design, optimization, and compression of artificial intelligence (AI) models, as well as the architecture and design of AI chips/systems for energy-efficient training and inference of such models. Currently, we place a strong emphasis on multimodal large foundation models for computer vision, vision-language, and medical applications. Our lab also collaborates closely with the InnoHK AI Chip Center for Smart Emerging Systems on the co-design and co-optimization of AI systems across the application, algorithm, and hardware layers.

News

  • June 2024.  Iterative Online Image Synthesis via Diffusion Model for Imbalanced Classification is accepted by MICCAI 2024. We introduce an Iterative Online Image Synthesis (IOIS) framework to address the class imbalance problem in medical image classification.
  • June 2024.  Rethinking Autoencoders for Medical Anomaly Detection from A Theoretical Perspective is accepted by MICCAI 2024. We provide, for the first time, a theoretical foundation for AE-based reconstruction methods in anomaly detection. Leveraging information theory, we elucidate the principles of these methods and show that the key to improving their performance lies in minimizing the information entropy of the latent vectors.
  • May 2024.  Aligning Medical Images with General Knowledge from Large Language Models is early accepted by MICCAI 2024. We propose ViP, a novel visual symptom-guided prompt learning framework for medical image analysis that facilitates general knowledge transfer from CLIP. ViP consists of two key components: a visual symptom generator and a dual-prompt network.
  • May 2024.  LENAS: Learning-based Neural Architecture Search and Ensemble for 3D Radiotherapy Dose Prediction is accepted by IEEE Transactions on Cybernetics. We propose a novel learning-based ensemble approach named LENAS, which integrates neural architecture search with knowledge distillation for 3D radiotherapy dose prediction.
  • April 2024.  DoRA: Weight-Decomposed Low-Rank Adaptation is accepted by ICML 2024. We first introduce a novel weight-decomposition analysis to investigate the inherent differences between full fine-tuning (FT) and LoRA. Building on these findings, and aiming to match the learning capacity of FT, we propose Weight-Decomposed Low-Rank Adaptation (DoRA).
  • March 2024.  BoNuS: Boundary Mining for Nuclei Segmentation with Partial Point Labels appears in IEEE Transactions on Medical Imaging. We propose BoNuS, a novel boundary-mining framework for nuclei segmentation that simultaneously learns nuclei interior and boundary information from point labels.
  • March 2024.  Genetic Quantization-Aware Approximation for Non-Linear Operations in Transformers is accepted by DAC 2024. This paper proposes a genetic LUT-approximation algorithm, GQA-LUT, that automatically determines the approximation parameters with quantization awareness. GQA-LUT enables INT8-based LUT approximation, achieving area savings of 81.3–81.7% and power reductions of 79.3–80.2% compared with high-precision FP/INT32 alternatives.
  • December 2023.  AdaP-CIM: Compute-in-Memory Based Neural Network Accelerator using Adaptive Posit is accepted by DATE 2024.
  • November 2023.  CAE-GReaT: Convolutional-Auxiliary Efficient Graph Reasoning Transformer for Dense Image Predictions is accepted by International Journal of Computer Vision. We propose an auxiliary and integrated network architecture, named Convolutional-Auxiliary Efficient Graph Reasoning Transformer, which joins the strengths of both CNNs and ViTs in a unified framework.
  • November 2023.  Dynamic Sub-Cluster-Aware Network for Few-Shot Skin Disease Classification is accepted by IEEE Transactions on Neural Networks and Learning Systems. This paper addresses the problem of few-shot skin disease classification by introducing a novel approach called the Sub-Cluster-Aware Network (SCAN) that enhances accuracy in diagnosing rare skin diseases.
  • October 2023.  Nuclei segmentation with point annotations from pathology images via self-supervised learning and co-training is accepted by Medical Image Analysis. We propose a weakly-supervised learning method for nuclei segmentation that requires only point annotations for training.
  • July 2023.  Compete to Win: Enhancing Pseudo Labels for Barely-supervised Medical Image Segmentation is accepted by IEEE Transactions on Neural Networks and Learning Systems. We propose a novel Compete-to-Win method to enhance pseudo-label quality. In contrast to directly using one model's predictions as pseudo labels, our key idea is that high-quality pseudo labels should be generated by comparing the confidence maps produced by multiple networks and selecting the most confident prediction.
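The Compete-to-Win item above describes per-pixel selection among the confidence maps of several networks. The sketch below is a minimal illustration of that general idea only; the function name, the two-network setting, and the array shapes are our own assumptions, not the paper's implementation.

```python
import numpy as np

def compete_to_win(probs_a, probs_b):
    """Minimal sketch of confidence-based pseudo-label selection.

    probs_a, probs_b: (C, H, W) softmax outputs from two segmentation
    networks. Returns an (H, W) pseudo-label map that, at each pixel,
    takes the prediction of whichever network is more confident there.
    """
    conf_a = probs_a.max(axis=0)      # per-pixel peak confidence, network A
    conf_b = probs_b.max(axis=0)      # per-pixel peak confidence, network B
    pred_a = probs_a.argmax(axis=0)   # per-pixel predicted class, network A
    pred_b = probs_b.argmax(axis=0)   # per-pixel predicted class, network B
    # Pseudo label: the more confident network wins at each pixel.
    return np.where(conf_a >= conf_b, pred_a, pred_b)
```

The same pattern extends to more than two networks by stacking the confidence maps and taking a per-pixel argmax over networks first.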
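For context on the DoRA item in the news above: the weight-decomposition view separates a weight matrix into a magnitude vector and a direction, applying the low-rank update to the direction only. The sketch below illustrates that idea under our own assumptions about shapes and names; it is not the authors' code.

```python
import numpy as np

def dora_merge(W0, B, A, m):
    """Sketch of weight-decomposed low-rank adaptation.

    W0: (d, k) frozen pretrained weight; B: (d, r) and A: (r, k) form the
    low-rank directional update; m: (1, k) learnable per-column magnitude.
    The direction is updated by the low-rank term, renormalized
    column-wise, then rescaled by the magnitude vector.
    """
    V = W0 + B @ A                                    # directional update
    col_norm = np.linalg.norm(V, axis=0, keepdims=True)
    return m * (V / col_norm)                         # magnitude * direction
```

Because each column is renormalized before rescaling, the norm of every column of the merged weight equals the corresponding entry of the magnitude vector.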

    Our Publications

    Our lab is at the forefront of cutting-edge AI research, covering a wide range of topics including AI chips/systems, compute-in-memory, electronic design automation, computer vision, vision-language, tiny machine learning, co-design, large foundation models, and medical image analysis.

    Our Research

    Our research aims to make advanced AI technologies more effective and efficient, enabling everyone to enjoy the convenience and pleasure brought by these transformative technologies that will shape the future.

    Our Demos

    Technology should never be limited to papers and slides. We believe our projects and demos are valuable in their own right and can drive practical solutions to real-world problems.

    Our Team

    We are a team of passionate researchers dedicated to pushing the frontier of artificial intelligence accelerators. We are committed to creating an inclusive research environment and recognize the importance of diverse knowledge backgrounds in the discovery process.