To address the rising challenges of copyright violations, misuse, and ethical concerns in generative AI, Professor Yu-Chiang Frank Wang and his research team at the Department of Electrical Engineering, National Taiwan University (NTU), have developed a groundbreaking technique called “Concept Erasing”—formally named Receler (Reliable Concept Erasing via Lightweight Erasers). Supported by Taiwan’s National Science and Technology Council (NSTC), this technology enables precise removal of high-risk concepts—such as gore, violence, deepfake impersonations, or the artistic styles of specific creators or brands—without retraining the entire generative model.
The team’s research was presented at the European Conference on Computer Vision (ECCV) 2024, one of the world’s top three international conferences in computer vision. Since its release, Receler has rapidly gained attention across the global AI research community, with increasing citations on Google Scholar and widespread adoption on GitHub, where it is freely available as open-source software.
While generative AI has revolutionized creativity—allowing users to produce professional-quality content with unprecedented ease—it has also raised concerns over violent or explicit imagery, deepfake fraud, and unauthorized style imitation. Traditional keyword filters and manual review processes are often insufficient, either missing problematic outputs or blocking legitimate content. Receler offers a more reliable solution: once a concept such as “violence” is erased, the AI model can no longer produce violent imagery, even when prompted directly or through indirect, paraphrased wording. Similarly, if a protected artistic style (e.g., Studio Ghibli) is removed, the model instead generates neutral, non-infringing imagery.
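To give a flavor of what “erasing a concept” means computationally, the toy sketch below removes the component of a prompt embedding that lies along a concept direction. This is only an illustrative simplification, not the actual Receler method (which trains lightweight eraser modules inside the diffusion model); the function name, dimensions, and “concept direction” here are all hypothetical.

```python
import numpy as np

def erase_concept(embedding: np.ndarray, concept_dir: np.ndarray) -> np.ndarray:
    """Toy stand-in for concept erasure: project out the component of
    `embedding` that lies along `concept_dir`, so the result carries no
    information in that direction."""
    c = concept_dir / np.linalg.norm(concept_dir)  # unit concept direction
    return embedding - np.dot(embedding, c) * c

rng = np.random.default_rng(0)
concept = rng.normal(size=8)     # hypothetical "violence" concept direction
prompt_emb = rng.normal(size=8)  # hypothetical prompt embedding

erased = erase_concept(prompt_emb, concept)
# After erasure, the embedding is orthogonal to the concept direction.
```

In the real system, the eraser is a small learned module attached to the generative model, trained so that erasure holds robustly even for reworded prompts, rather than a single linear projection.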
By integrating Receler, AI platforms, educational institutions, brands, and government agencies can maintain creative utility while ensuring trustworthy, auditable, and ethically sound AI systems. With AI safety as a national priority, Taiwan continues to invest in advancing research that strengthens model robustness, interpretability, and privacy protection—fostering a responsible, human-centered AI ecosystem for the future.
Media Contact:
Shu-Chun Chen, Associate Researcher
Shih-Yu Huang, Associate Researcher
Department of Engineering and Technologies
National Science and Technology Council
Tel: +886(2)2737-7775