
[Paper Reading] PRADA: Protecting Against DNN Model Stealing Attacks (2019)


Abstract

Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API.

In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29–44 percentage points, pp), and prediction accuracy (…)
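The attack the abstract describes boils down to a loop: the adversary generates synthetic queries, labels them through the victim's prediction API, and trains a substitute model on the stolen labels. Below is a minimal sketch of that loop, not the paper's actual method: the victim model, the Gaussian query distribution, and the use of scikit-learn's `LogisticRegression` for both models are all illustrative assumptions.

```python
# Minimal sketch of a model extraction attack via a prediction API.
# Everything here is illustrative: the "victim" stands in for a model
# behind an API, and the attacker only ever sees its output labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Victim: trained privately; the attacker cannot see its weights.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
victim = LogisticRegression().fit(X, y)

def prediction_api(queries):
    """The only interface the attacker has: inputs in, labels out."""
    return victim.predict(queries)

# Attacker: generate synthetic queries (here, simple Gaussian noise),
# label them via the API, and train a substitute on the stolen labels.
synthetic_queries = rng.normal(size=(1000, 5))
stolen_labels = prediction_api(synthetic_queries)
substitute = LogisticRegression().fit(synthetic_queries, stolen_labels)

# Agreement between substitute and victim on fresh inputs measures
# how faithfully the model was extracted.
X_probe = rng.normal(size=(200, 5))
agreement = np.mean(substitute.predict(X_probe) == victim.predict(X_probe))
```

The paper's contribution is precisely in making the query-generation step smarter than random noise (and tuning the substitute's training hyperparameters), so that fewer API calls yield a more faithful, more transferable substitute.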


