International Affairs

Webinar hosted by the Taiwan–India Joint Research Center: Generating Adversarial Examples by Makeup Attacks on Face Recognition Models [2020/10/20]

The Taiwan–India Joint Research Center, located in India, will hold an online seminar on October 25 at 12:30 p.m. Taiwan time (10:00 a.m. India time).
The topic of the seminar is "Generating Adversarial Examples by Makeup Attacks on Face Recognition Models".
The speaker is Prof. Chen-Kuo Chiang (江振國) of this university's Department of Computer Science and Information Engineering.

Abstract:
Recent neural network models have proven powerful in many applications. This has brought more and more attack methods that generate adversarial examples to degrade the recognition accuracy of deep neural models. Adversarial attack methods can be classified into two categories: white-box attacks and black-box attacks. In a white-box attack, the attacker knows most of the model's architecture information. In a black-box attack, the attacker knows little about the architecture but can still produce adversarial examples with perturbation noise. Since face recognition is an important application in computer vision, face recognition models can be attacked with exaggerated makeup or facial accessories so that the wearer dodges correct identification. In this talk, a novel makeup attack is introduced as a white-box attack that transfers non-makeup images to makeup images, where the perturbation information of the attack is hidden in the makeup areas.
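To illustrate the white-box setting described in the abstract, the following is a minimal sketch of a gradient-sign (FGSM-style) attack on a toy logistic-regression "model". This is an illustrative assumption, not the speaker's makeup-attack method: since the attacker knows the model weights, the gradient of the loss with respect to the input can be computed exactly, and the input is nudged in the direction that increases the loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, w, b, y_true, eps=0.1):
    """White-box FGSM-style attack on a logistic-regression model.

    Knowing w and b, the attacker computes the exact gradient of the
    cross-entropy loss with respect to the input x, then steps in the
    sign direction of that gradient to increase the loss.
    """
    p = sigmoid(w @ x + b)            # model prediction in (0, 1)
    grad_x = (p - y_true) * w         # dLoss/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)  # adversarial example

# Toy setup (hypothetical weights and input, for illustration only)
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1.0

x_adv = fgsm_attack(x, w, b, y, eps=0.5)

# The attacker's goal: the loss on the perturbed input is higher,
# so the model is less confident in the correct label.
loss = lambda x_: -np.log(sigmoid(w @ x_ + b))
print(loss(x), loss(x_adv))
```

The makeup attack discussed in the talk goes further than this sketch: rather than adding unconstrained noise, it hides the perturbation inside semantically plausible makeup regions of a face image.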

Those interested, please fill out the registration form at "Webinar – Generating Adversarial Examples by Makeup Attacks on Face Recognition Models".
The seminar link will be sent to registrants on October 24.