Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA


    James Seale Smith1,2, Yen-Chang Hsu1, Lingyu Zhang1, Ting Hua1

    Zsolt Kira2, Yilin Shen1, Hongxia Jin1


    1Samsung Research America, 2Georgia Institute of Technology

    Transactions on Machine Learning Research (TMLR) 2024


Paper

A use case of our work: a mobile app sequentially learns new customized concepts. At a later time, the user can generate photos of previously learned concepts. The user should also be able to generate photos that combine multiple concepts, which rules out methods such as per-concept adapters or single-image-conditioned diffusion. Furthermore, the concepts are fine-grained, so simply learning new tokens or words is not effective.

    Abstract


Recent works demonstrate a remarkable ability to customize text-to-image diffusion models given only a few example images. What happens if you try to customize such models using multiple, fine-grained concepts in a sequential (i.e., continual) manner? In our work, we show that recent state-of-the-art customization methods for text-to-image models suffer from catastrophic forgetting when new concepts arrive sequentially. Specifically, when adding a new concept, the ability to generate high-quality images of past, similar concepts degrades. To circumvent this forgetting, we propose a new method, C-LoRA, composed of a continually self-regularized low-rank adaptation in the cross-attention layers of the popular Stable Diffusion model. Furthermore, we use customization prompts which do not include the word for the customized object (i.e., "person" for a human face dataset) and are initialized as completely random embeddings. Importantly, our method induces only marginal additional parameter costs and requires no storage of user data for replay. We show not only that C-LoRA outperforms several baselines in our proposed setting of text-to-image continual customization, which we refer to as Continual Diffusion, but also that it achieves a new state of the art in the well-established rehearsal-free continual learning setting for image classification. The strong performance of C-LoRA in two separate domains positions it as a compelling solution for a wide range of applications, and we believe it has significant potential for practical impact.


    Method


Our method, C-LoRA, updates the key and value (K-V) projections in the U-Net cross-attention modules of Stable Diffusion using a continual, self-regularized low-rank weight adaptation. The past LoRA weight deltas regularize the new LoRA weight deltas by guiding which parameters are most available to be updated, as sketched below. Unlike prior work, we initialize custom tokens as random features and remove the concept name (e.g., "person") from the prompt.
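The PyTorch sketch below illustrates the core idea under stated assumptions: a frozen projection layer carries a trainable low-rank delta for the current concept, and an elementwise penalty discourages new deltas where the accumulated past deltas are large. The class name CLoRALinear, the rank, and the task-folding logic are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class CLoRALinear(nn.Module):
    """Frozen linear layer (e.g., a K or V projection in cross-attention)
    with a per-concept low-rank delta and a self-regularization penalty
    that discourages new deltas from overwriting past ones."""

    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # original Stable Diffusion weights stay frozen
        out_f, in_f = base.weight.shape
        # Trainable low-rank factors for the current concept (standard LoRA
        # init: A random, B zero, so the delta starts at zero).
        self.A = nn.Parameter(torch.randn(out_f, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, in_f))
        # Frozen sum of all past concepts' deltas.
        self.register_buffer("past_delta", torch.zeros(out_f, in_f))

    def forward(self, x):
        delta = self.past_delta + self.A @ self.B
        return nn.functional.linear(x, self.base.weight + delta, self.base.bias)

    def self_regularization(self):
        # Penalize new updates at positions where past deltas are large
        # (elementwise product, squared Frobenius norm).
        return (self.past_delta * (self.A @ self.B)).pow(2).sum()

    @torch.no_grad()
    def finish_task(self):
        # Fold the learned delta into the frozen history, then reset factors.
        self.past_delta += self.A @ self.B
        self.A.normal_(std=0.01)
        self.B.zero_()

A hypothetical usage: add the penalty, scaled by a regularization weight, to the customization loss, then call finish_task() before starting the next concept.

layer = CLoRALinear(nn.Linear(320, 320), rank=4)
x = torch.randn(2, 77, 320)
loss = layer(x).pow(2).mean() + 0.5 * layer.self_regularization()
loss.backward()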

    Results: Faces


    Qualitative results of continual customization using the Celeb-A HQ dataset. Results are shown for three concepts from the learning sequence sampled after training ten concepts sequentially.


Multi-concept results after training on 10 sequential tasks using Celeb-A HQ. Using standard quadrant numbering (I is upper right, II is upper left, III is lower left, IV is lower right), we indicate which target data belongs in each generated image by annotating the target images directly.

    Results: Landmarks


    Qualitative results of continual customization using waterfalls from the Google Landmarks dataset. Results are shown for three concepts from the learning sequence sampled after training ten concepts sequentially.

    BibTeX

                    @article{smith2024continualdiffusion,
                      title={Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA},
                      author={Smith, James Seale and Hsu, Yen-Chang and Zhang, Lingyu and Hua, Ting and Kira, Zsolt and Shen, Yilin and Jin, Hongxia},
                      journal={Transactions on Machine Learning Research},
                      issn={2835-8856},
                      year={2024}
                    }
                  
