
Canary Framework

An Adversarial Robustness Evaluation Platform for Deep Models on Image Classification


Most Extensive Methods Library

Canary provides researchers with a built-in library that integrates 29 widely used Attack Methods, 20 common Defense Methods in 4 categories (Planned), and 18 Deep Models.

Most Flexible Integration Framework

Canary provides researchers with a fast integration framework, SEFI, which lets them bring their self-implemented attacks, defense methods, or models into the platform for testing and evaluation by adding just a few Python decorators (see the sketch after these highlights).

Most Comprehensive Evaluation

Canary provides researchers with method- and model-independent metrics covering 13 Attack Aspects, 3 Model Aspects, and 10 Defense Aspects (Planned), together with benchmark rank ordering.
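
As a rough illustration of the decorator-based registration that SEFI-style integration relies on, the self-contained sketch below registers a hypothetical custom attack in a toy registry. The decorator and registry names here (register_attacker, ATTACKER_REGISTRY, MyFGSM) are placeholders of our own, not the actual SEFI API; please refer to the SEFI documentation for the real decorator names and parameters.

# Minimal, self-contained sketch of decorator-based registration.
# These names are illustrative placeholders, NOT the actual SEFI API.
ATTACKER_REGISTRY = {}

def register_attacker(name):
    # Hypothetical decorator: records an attack class under `name`.
    def wrapper(cls):
        ATTACKER_REGISTRY[name] = cls
        return cls
    return wrapper

@register_attacker("MY_FGSM")
class MyFGSM:
    def __init__(self, epsilon=1 / 255):
        self.epsilon = epsilon

    def attack(self, model, imgs, labels):
        # Self-implemented perturbation logic would go here.
        raise NotImplementedError

# The platform can then look the attack up by name, e.g.:
# attacker_cls = ATTACKER_REGISTRY["MY_FGSM"]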

WARNING

The document you are currently reading is an early version: it is incomplete, is still being edited by the authors, and will be updated continuously. We will provide the complete document, as well as an English version, as soon as possible. If you have questions, please contact jiazheng.sun@bit.edu.cn for more information.

# Evaluation Is Also Available Out-of-the-Box

The ResNet model can be tested on the CIFAR10 dataset using the I-FGSM attack with just the following code:

import numpy as np
from canary_sefi.service.security_evaluation import SecurityEvaluation
from canary_sefi.task_manager import task_manager
# NOTE: the import path of SEFI_component_manager is assumed here; adjust it
# to match your canary_sefi installation if it differs.
from canary_sefi.core.component.component_manager import SEFI_component_manager

# Register the built-in models and attack methods from the Canary library
from canary_lib import canary_lib_model
from canary_lib import canary_lib_attacker
SEFI_component_manager.add_all(canary_lib_model)
SEFI_component_manager.add_all(canary_lib_attacker)

if __name__ == "__main__":
    # Init
    task_manager.init_task(show_logo=True, run_device="cuda")
    # Config
    config = {
        "dataset_size": 1,
        "dataset": "CIFAR10",  # Config dataset
        "model_list": ["ResNet(CIFAR-10)"],  # Set model
        "attacker_list": {"I_FGSM": ["ResNet(CIFAR-10)"]},  # Set attack method
        "attacker_config": {  # Set attack parameters
            "I_FGSM": {
                "clip_min": 0,
                "clip_max": 1,
                "epsilon": 1 / 255,                   # total L-inf perturbation budget
                "nb_iter": 100,                       # number of iterations
                "eps_iter": 2.5 * ((1 / 255) / 100),  # per-step size
                "norm": np.inf,
                "attack_type": "UNTARGETED",
            }
        }
    }
    # Run the full attack evaluation
    security_evaluation = SecurityEvaluation(config)
    security_evaluation.attack_full_test()
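
In the config above, the per-step size follows the common heuristic eps_iter ≈ 2.5 · epsilon / nb_iter, so the attack can traverse the full L∞ budget even though each individual step stays small. A minimal sketch of that relation (the helper function is our own illustration, not part of Canary):

def ifgsm_step_size(epsilon, nb_iter, factor=2.5):
    # Illustrative helper (not part of Canary): the common
    # eps_iter = factor * epsilon / nb_iter step-size heuristic.
    return factor * epsilon / nb_iter

print(ifgsm_step_size(1 / 255, 100))  # ≈ 9.8e-05, i.e. 2.5 * ((1 / 255) / 100)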

# Citing Our Paper

We sincerely hope that Canary is of assistance to you. If you use Canary in your research, please cite our paper:

@Article{electronics12173665,
  AUTHOR = {Sun, Jiazheng and Chen, Li and Xia, Chenxiao and Zhang, Da and Huang, Rong and Qiu, Zhi and Xiong, Wenqi and Zheng, Jun and Tan, Yu-An},
  TITLE = {CANARY: An Adversarial Robustness Evaluation Platform for Deep Learning Models on Image Classification},
  JOURNAL = {Electronics},
  VOLUME = {12},
  YEAR = {2023},
  NUMBER = {17},
  ARTICLE-NUMBER = {3665},
  URL = {https://www.mdpi.com/2079-9292/12/17/3665},
  ISSN = {2079-9292},
  DOI = {10.3390/electronics12173665}
}

# About Canary's Contributors

We thank the following members for their contributions to Canary:

Jiazheng Sun, Li Chen, Chenxiao Xia, Da Zhang, Wenqi Xiong, Shujie Hu, Jing Liu, Zhi Qiu, DongLi Tan, Heng Ye, Rong Huang, Ruinan Ma, Jiayao Yang, Yangxiao Xu, Dehua Zhu, and Guanting Wu

We are also particularly grateful to other open-source projects for inspiring this project.

This project was completed under the guidance of Prof. Jun Zheng and Prof. Yu-An Tan at the School of Cyberspace Science & Technology, Beijing Institute of Technology.

See About Canary for details.