Boxin Wang
Research Scientist at NVIDIA
Title
Cited by
Towards efficient data valuation based on the Shapley value
R Jia, D Dao, B Wang, FA Hubis, N Hynes, NM Gürel, B Li, C Zhang, ...
The 22nd International Conference on Artificial Intelligence and Statistics …, 2019
Efficient task-specific data valuation for nearest neighbor algorithms
R Jia, D Dao, B Wang, FA Hubis, NM Gurel, B Li, C Zhang, CJ Spanos, ...
arXiv preprint arXiv:1908.08619, 2019
DecodingTrust: A comprehensive assessment of trustworthiness in GPT models
B Wang, W Chen, H Pei, C Xie, M Kang, C Zhang, C Xu, Z Xiong, R Dutta, ...
NeurIPS, 2023
Adversarial GLUE: A multi-task benchmark for robustness evaluation of language models
B Wang, C Xu, S Wang, Z Gan, Y Cheng, J Gao, AH Awadallah, B Li
arXiv preprint arXiv:2111.02840, 2021
Reinforcement-learning based portfolio management with augmented asset movement prediction states
Y Ye, H Pei, B Wang, PY Chen, Y Zhu, J Xiao, B Li
Proceedings of the AAAI Conference on Artificial Intelligence 34 (01), 1112-1119, 2020
InfoBERT: Improving robustness of language models from an information theoretic perspective
B Wang, S Wang, Y Cheng, Z Gan, R Jia, B Li, J Liu
arXiv preprint arXiv:2010.02329, 2020
G-PATE: Scalable differentially private data generator via private aggregation of teacher discriminators
Y Long, B Wang, Z Yang, B Kailkhura, A Zhang, C Gunter, B Li
Advances in Neural Information Processing Systems 34, 2965-2977, 2021
T3: Tree-autoencoder constrained adversarial text generation for targeted attack
B Wang, H Pei, B Pan, Q Chen, S Wang, B Li
arXiv preprint arXiv:1912.10375, 2019
Exploring the limits of domain-adaptive training for detoxifying large-scale language models
B Wang, W Ping, C Xiao, P Xu, M Patwary, M Shoeybi, B Li, ...
Advances in Neural Information Processing Systems 35, 35811-35824, 2022
SemAttack: Natural textual attacks via different semantic spaces
B Wang, C Xu, X Liu, Y Cheng, B Li
arXiv preprint arXiv:2205.01287, 2022
DataLens: Scalable privacy preserving training via gradient compression and aggregation
B Wang, F Wu, Y Long, L Rimanic, C Zhang, B Li
Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications …, 2021
Shall we pretrain autoregressive language models with retrieval? A comprehensive study
B Wang, W Ping, P Xu, L McAfee, Z Liu, M Shoeybi, Y Dong, O Kuchaiev, ...
arXiv preprint arXiv:2304.06762, 2023
Can Public Large Language Models Help Private Cross-device Federated Learning?
B Wang, YJ Zhang, Y Cao, B Li, HB McMahan, S Oh, Z Xu, M Zaheer
arXiv preprint arXiv:2305.12132, 2023
Uncovering the connections between adversarial transferability and knowledge transferability
K Liang, JY Zhang, B Wang, Z Yang, S Koyejo, B Li
International Conference on Machine Learning, 6577-6587, 2021
InstructRetro: Instruction tuning post retrieval-augmented pretraining
B Wang, W Ping, L McAfee, P Xu, B Li, M Shoeybi, B Catanzaro
arXiv preprint arXiv:2310.07713, 2023
Certifying out-of-domain generalization for blackbox functions
MG Weber, L Li, B Wang, Z Zhao, B Li, C Zhang
International Conference on Machine Learning, 23527-23548, 2022
Improving certified robustness via statistical learning with logical reasoning
Z Yang, Z Zhao, B Wang, J Zhang, L Li, H Pei, B Karlaš, J Liu, H Guo, ...
Advances in Neural Information Processing Systems 35, 34859-34873, 2022
Incorporating external POS tagger for punctuation restoration
N Shi, W Wang, B Wang, J Li, X Liu, Z Lin
arXiv preprint arXiv:2106.06731, 2021
End-to-end robustness for sensing-reasoning machine learning pipelines
Z Yang, Z Zhao, H Pei, B Wang, B Karlas, J Liu, H Guo, B Li, C Zhang
arXiv preprint arXiv:2003.00120, 2020
Identifying and mitigating vulnerabilities in LLM-integrated applications
F Jiang, Z Xu, L Niu, B Wang, J Jia, B Li, R Poovendran
arXiv preprint arXiv:2311.16153, 2023