Hongcheng Gao
Title · Cited by · Year
Generative pretraining in multimodality
Q Sun, Q Yu, Y Cui, F Zhang, X Zhang, Y Wang, H Gao, J Liu, T Huang, ...
ICLR, 2024
Cited by 82 · 2024
Exploring the universal vulnerability of prompt-based learning paradigm
L Xu, Y Chen, G Cui, H Gao, Z Liu
Findings of NAACL, 2022
Cited by 54 · 2022
Why should adversarial perturbations be imperceptible? Rethink the research paradigm in adversarial NLP
Y Chen, H Gao, G Cui, F Qi, L Huang, Z Liu, M Sun
EMNLP, 2022
Cited by 22 · 2022
Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evaluations
L Yuan, Y Chen, G Cui, H Gao, F Zou, X Cheng, H Ji, Z Liu, M Sun
NeurIPS (Dataset and Benchmark Track) 36, 2023
Cited by 21 · 2023
Textual backdoor attacks can be more harmful via two simple tricks
Y Chen, F Qi, H Gao, Z Liu, M Sun
EMNLP, 2022
Cited by 10 · 2022
Efficient detection of LLM-generated texts with a Bayesian surrogate model
Z Deng, H Gao, Y Miao, H Zhang
Findings of ACL, 2024
Cited by 9 · 2024
Evaluating the robustness of text-to-image diffusion models against real-world attacks
H Gao, H Zhang, Y Dong, Z Deng
arXiv preprint arXiv:2306.13103, 2023
Cited by 8 · 2023
From adversarial arms race to model-centric evaluation: Motivating a unified automatic robustness evaluation framework
Y Chen, H Gao, G Cui, L Yuan, D Kong, H Wu, N Shi, B Yuan, L Huang, ...
Findings of ACL, 2023
Cited by 4 · 2023
Universal Prompt Optimizer for Safe Text-to-Image Generation
Z Wu, H Gao, Y Wang, X Zhang, S Wang
NAACL, 2024
2024
Articles 1–9