{"id":2499,"date":"2025-10-17T11:52:09","date_gmt":"2025-10-17T03:52:09","guid":{"rendered":"https:\/\/cv.nirc.top\/?p=2499"},"modified":"2026-01-30T13:17:06","modified_gmt":"2026-01-30T05:17:06","slug":"unified-2d-3d-discrete-priors-for-noise-robust-and-calibration-free-multiview-3d-human-pose-estimation","status":"publish","type":"post","link":"https:\/\/cv.nirc.top\/zh\/2025\/unified-2d-3d-discrete-priors-for-noise-robust-and-calibration-free-multiview-3d-human-pose-estimation\/","title":{"rendered":"Unified 2D-3D Discrete Priors for Noise-Robust and Calibration-Free Multiview 3D Human Pose Estimation"},"content":{"rendered":"<div class=\"wp-block-group has-global-padding is-layout-constrained wp-block-group-is-layout-constrained\">\n<h2 class=\"wp-block-heading\">Abstract<\/h2>\n\n\n\n<p><\/p>\n\n\n\n<p>Multi-view 3D human pose estimation (HPE) leverages complementary information across views to improve accuracy and robustness. Traditional methods rely on camera calibration to establish geometric correspondences, which is sensitive to calibration accuracy and lacks flexibility in dynamic settings. Calibration-free approaches address these limitations by learning adaptive view interactions, typically leveraging expressive and flexible continuous representations. However, as the multiview interaction relationship is learned entirely from data without constraint, they are vulnerable to noisy input, which can propagate, amplify and accumulate errors across all views, severely corrupting the final estimated pose. To mitigate this, we propose a novel framework that integrates a noise-resilient discrete prior into the continuous representation-based model. Specifically, we introduce the UniCodebook, a unified, compact, robust, and discrete representation complementary to continuous features, allowing the model to benefit from robustness to noise while preserving regression capability. 
Furthermore, we propose an attribute-preserving and complementarity-enhancing Discrete-Continuous Spatial Attention (DCSA) mechanism to facilitate interaction between discrete priors and continuous pose features. Extensive experiments on three representative datasets demonstrate that our approach outperforms both calibration-required and calibration-free methods, achieving state-of-the-art performance.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Overview<\/h2>\n\n\n\n<p><\/p>\n\n\n\n<figure data-wp-context=\"{&quot;imageId&quot;:&quot;69f8bd69de678&quot;}\" data-wp-interactive=\"core\/image\" data-wp-key=\"69f8bd69de678\" class=\"wp-block-image size-large wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"365\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on--click=\"actions.showLightbox\" data-wp-on--load=\"callbacks.setButtonStyles\" data-wp-on-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/arch_overview-1-1024x365.jpg\" alt=\"\" class=\"wp-image-2504\" srcset=\"https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/arch_overview-1-1024x365.jpg 1024w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/arch_overview-1-300x107.jpg 300w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/arch_overview-1-768x274.jpg 768w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/arch_overview-1-1536x547.jpg 1536w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/arch_overview-1-2048x730.jpg 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" 
\/><button\n\t\t\tclass=\"lightbox-trigger\"\n\t\t\ttype=\"button\"\n\t\t\taria-haspopup=\"dialog\"\n\t\t\taria-label=\"\u653e\u5927\"\n\t\t\tdata-wp-init=\"callbacks.initTriggerButton\"\n\t\t\tdata-wp-on--click=\"actions.showLightbox\"\n\t\t\tdata-wp-style--right=\"state.imageButtonRight\"\n\t\t\tdata-wp-style--top=\"state.imageButtonTop\"\n\t\t>\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\" \/>\n\t\t\t<\/svg>\n\t\t<\/button><figcaption class=\"wp-element-caption\">Compared with current SOTA: (a) <strong>Continuous transformer-based lifting method<\/strong>, which directly processes 2D pose inputs to estimate 3D poses. (b) <strong>Proposed method<\/strong>, which integrates discrete features as a robust prior within a continuous transformer-based framework, enhancing robustness to noisy 2D inputs and improving pose estimation accuracy.<\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<figure data-wp-context=\"{&quot;imageId&quot;:&quot;69f8bd69df6f5&quot;}\" data-wp-interactive=\"core\/image\" data-wp-key=\"69f8bd69df6f5\" class=\"wp-block-image size-large wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"761\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on--click=\"actions.showLightbox\" data-wp-on--load=\"callbacks.setButtonStyles\" data-wp-on-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/arch_details-1-1024x761.jpg\" alt=\"\" class=\"wp-image-2505\" srcset=\"https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/arch_details-1-1024x761.jpg 1024w, 
https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/arch_details-1-300x223.jpg 300w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/arch_details-1-768x571.jpg 768w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/arch_details-1-1536x1142.jpg 1536w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/arch_details-1-2048x1523.jpg 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><button\n\t\t\tclass=\"lightbox-trigger\"\n\t\t\ttype=\"button\"\n\t\t\taria-haspopup=\"dialog\"\n\t\t\taria-label=\"\u653e\u5927\"\n\t\t\tdata-wp-init=\"callbacks.initTriggerButton\"\n\t\t\tdata-wp-on--click=\"actions.showLightbox\"\n\t\t\tdata-wp-style--right=\"state.imageButtonRight\"\n\t\t\tdata-wp-style--top=\"state.imageButtonTop\"\n\t\t>\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\" \/>\n\t\t\t<\/svg>\n\t\t<\/button><figcaption class=\"wp-element-caption\">Two stages of the proposed calibration-free multiview 3D human pose lifting pipeline <strong>(a, b)<\/strong> and the detailed structure of the Spatial Multi-Head Self-Attention (MHSA) with Discrete-Continuous Spatial Attention (DCSA) <strong>(c)<\/strong>. In <strong>Stage I<\/strong>, we construct <strong>the UniCodebook<\/strong>, a unified discrete representation space, through a <strong>multi-strategy training scheme (2Dto2D, 2Dto3D, 3Dto2D, and 3Dto3D).<\/strong> Both 2D and 3D poses are encoded as sets of discrete tokens in this shared space, bridging the representation gap between 2D and 3D data. 
In <strong>Stage II<\/strong>, a transformer-based continuous model is employed for pose lifting, where codebook tokens generated from the UniCodebook are injected into the hybrid spatial attention block. Here, the proposed DCSA mechanism is integrated with conventional MHSA to facilitate effective fusion between the noise-resilient discrete priors and expressive continuous pose features, which enhances the robustness to noisy 2D input.<\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Qualitative Results<\/h2>\n\n\n\n<p><\/p>\n\n\n\n<figure data-wp-context=\"{&quot;imageId&quot;:&quot;69f8bd69e0d2f&quot;}\" data-wp-interactive=\"core\/image\" data-wp-key=\"69f8bd69e0d2f\" class=\"wp-block-image size-large wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"635\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on--click=\"actions.showLightbox\" data-wp-on--load=\"callbacks.setButtonStyles\" data-wp-on-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/qualitative_results_improved_cropped-1-1024x635.png\" alt=\"\" class=\"wp-image-2510\" srcset=\"https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/qualitative_results_improved_cropped-1-1024x635.png 1024w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/qualitative_results_improved_cropped-1-300x186.png 300w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/qualitative_results_improved_cropped-1-768x476.png 768w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/qualitative_results_improved_cropped-1.png 1483w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" 
\/><button\n\t\t\tclass=\"lightbox-trigger\"\n\t\t\ttype=\"button\"\n\t\t\taria-haspopup=\"dialog\"\n\t\t\taria-label=\"\u653e\u5927\"\n\t\t\tdata-wp-init=\"callbacks.initTriggerButton\"\n\t\t\tdata-wp-on--click=\"actions.showLightbox\"\n\t\t\tdata-wp-style--right=\"state.imageButtonRight\"\n\t\t\tdata-wp-style--top=\"state.imageButtonTop\"\n\t\t>\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\" \/>\n\t\t\t<\/svg>\n\t\t<\/button><figcaption class=\"wp-element-caption\">Qualitative comparisons of 3D human poses estimated by the <strong>baseline <\/strong>and the <strong>baseline with codebook<\/strong>. The <strong>orange skeleton denotes the prediction<\/strong>, while the <strong>green skeleton indicates the ground truth<\/strong>. Additionally, we visualize the <strong>joint-to-joint attention heatmap and DCSA heatmap<\/strong> (joint-to-DiscreteToken Attention in the figure) in the first spatial block. 
Both models are trained with 4 views, but for space efficiency, we only present the images and predictions from view 0.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Comparison with SOTA<\/h2>\n\n\n\n<p><\/p>\n\n\n\n<figure data-wp-context=\"{&quot;imageId&quot;:&quot;69f8bd69e22e0&quot;}\" data-wp-interactive=\"core\/image\" data-wp-key=\"69f8bd69e22e0\" class=\"wp-block-image size-large wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"392\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on--click=\"actions.showLightbox\" data-wp-on--load=\"callbacks.setButtonStyles\" data-wp-on-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/sota_with_h36m-1-1-1024x392.png\" alt=\"\" class=\"wp-image-2521\" srcset=\"https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/sota_with_h36m-1-1-1024x392.png 1024w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/sota_with_h36m-1-1-300x115.png 300w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/sota_with_h36m-1-1-768x294.png 768w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/sota_with_h36m-1-1-1536x589.png 1536w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/sota_with_h36m-1-1.png 1678w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><button\n\t\t\tclass=\"lightbox-trigger\"\n\t\t\ttype=\"button\"\n\t\t\taria-haspopup=\"dialog\"\n\t\t\taria-label=\"\u653e\u5927\"\n\t\t\tdata-wp-init=\"callbacks.initTriggerButton\"\n\t\t\tdata-wp-on--click=\"actions.showLightbox\"\n\t\t\tdata-wp-style--right=\"state.imageButtonRight\"\n\t\t\tdata-wp-style--top=\"state.imageButtonTop\"\n\t\t>\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 
1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\" \/>\n\t\t\t<\/svg>\n\t\t<\/button><figcaption class=\"wp-element-caption\">Results on Human3.6M are reported using MPJPE as the evaluation metric. CPN, HRNet and ResNet152 are different 2D pose detectors. GT means using the ground truth 2D pose. * marks an image-to-3D method. \u2020 indicates our reimplementation. T represents the number of frames.<\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Noise Robustness<\/h2>\n\n\n\n<p><\/p>\n\n\n\n<figure data-wp-context=\"{&quot;imageId&quot;:&quot;69f8bd69e3c55&quot;}\" data-wp-interactive=\"core\/image\" data-wp-key=\"69f8bd69e3c55\" class=\"wp-block-image size-large wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"299\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on--click=\"actions.showLightbox\" data-wp-on--load=\"callbacks.setButtonStyles\" data-wp-on-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/ablation_noise-1-1024x299.png\" alt=\"\" class=\"wp-image-2525\" srcset=\"https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/ablation_noise-1-1024x299.png 1024w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/ablation_noise-1-300x88.png 300w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/ablation_noise-1-768x224.png 768w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/ablation_noise-1-1536x448.png 1536w, https:\/\/cv.nirc.top\/wp-content\/uploads\/2025\/10\/ablation_noise-1-2048x598.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" 
\/><button\n\t\t\tclass=\"lightbox-trigger\"\n\t\t\ttype=\"button\"\n\t\t\taria-haspopup=\"dialog\"\n\t\t\taria-label=\"\u653e\u5927\"\n\t\t\tdata-wp-init=\"callbacks.initTriggerButton\"\n\t\t\tdata-wp-on--click=\"actions.showLightbox\"\n\t\t\tdata-wp-style--right=\"state.imageButtonRight\"\n\t\t\tdata-wp-style--top=\"state.imageButtonTop\"\n\t\t>\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\" \/>\n\t\t\t<\/svg>\n\t\t<\/button><figcaption class=\"wp-element-caption\">Comparison of MPJPE error across four models (<em>i.e.<\/em>, baseline trained on H36M CPN, baseline with codebook trained on H36M CPN, baseline trained on H36M GT, and baseline with codebook trained on H36M GT) under varying noise intensities without retraining. For each instance (consisting of multi-view 2D poses of the same person at the same timestamp), we randomly select 1 to 4 views and add Gaussian noise with zero mean and a standard deviation of &#8220;Noise Intensity&#8221; pixels to each 2D joint. Models trained on H36M CPN are evaluated on H36M CPN test data with the added noise; likewise, models trained on H36M GT are evaluated on H36M GT test data with the added noise. 
<strong>The results show that models with the codebook exhibit robustness across all noise levels, with greater robustness observed at higher noise intensities.<\/strong><\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Bibtex<\/h2>\n\n\n\n<p><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>@inproceedings{chen2025unicodebook,\n  title={{Unified 2D-3D Discrete Priors for Noise-Robust and Calibration-Free Multiview 3D Human Pose Estimation}},\n  author={Chen, Geng and Ren, Pengfei and Jian, Xufeng and Sun, Haifeng and Zhang, Menghao and Qi, Qi and Zhuang, Zirui and Wang, Jing and Liao, Jianxin and Wang, Jingyu},\n  booktitle={Advances in Neural Information Processing Systems},\n  year={2025}\n}<\/code><\/pre>\n\n\n\n<p><\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Abstract Multi-view 3D human pose estimation (HPE) leverages complementary information across views to improve accuracy and robustness. Traditional methods rely on camera calibration to establish geometric correspondences, which is sensitive to calibration accuracy and lacks flexibility in dynamic settings. Calibration-free approaches address these limitations by learning adaptive view interactions, typically leveraging expressive and flexible continuous [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":2527,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":"","_links_to":"","_links_to_target":""},"categories":[25],"tags":[],"class_list":["post-2499","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-multi-view-3d-pose-estimation"],"acf":{"writer":{"simple_value_formatted":"<code><em>This data type is not supported! 
Please contact the author for help.<\/em><\/code>","value_formatted":[{"writer_link":{"simple_value_formatted":"<a href=\"https:\/\/dblp.uni-trier.de\/pid\/76\/4764-6.html\" target=\"\">Geng Chen*<\/a>","value_formatted":{"title":"Geng Chen*","url":"https:\/\/dblp.uni-trier.de\/pid\/76\/4764-6.html","target":""},"value":{"title":"Geng Chen*","url":"https:\/\/dblp.uni-trier.de\/pid\/76\/4764-6.html","target":""},"field":{"ID":2368,"key":"field_687f0c15f3394","label":"\u4f5c\u8005\u4e0e\u4f5c\u8005\u4e3b\u9875","name":"writer_link","aria-label":"","prefix":"acf","type":"link","value":null,"menu_order":0,"instructions":"","required":1,"id":"","class":"","conditional_logic":0,"parent":2366,"wrapper":{"width":"","class":"","id":""},"return_format":"array","allow_in_bindings":1,"_name":"writer_link","_valid":1,"parent_repeater":"field_687f08dfb7e07"}}},{"writer_link":{"simple_value_formatted":"<a href=\"https:\/\/pengfeiren96.github.io\/\" target=\"\">Pengfei Ren*<\/a>","value_formatted":{"title":"Pengfei Ren*","url":"https:\/\/pengfeiren96.github.io\/","target":""},"value":{"title":"Pengfei Ren*","url":"https:\/\/pengfeiren96.github.io\/","target":""},"field":{"ID":2368,"key":"field_687f0c15f3394","label":"\u4f5c\u8005\u4e0e\u4f5c\u8005\u4e3b\u9875","name":"writer_link","aria-label":"","prefix":"acf","type":"link","value":null,"menu_order":0,"instructions":"","required":1,"id":"","class":"","conditional_logic":0,"parent":2366,"wrapper":{"width":"","class":"","id":""},"return_format":"array","allow_in_bindings":1,"_name":"writer_link","_valid":1,"parent_repeater":"field_687f08dfb7e07"}}},{"writer_link":{"simple_value_formatted":"<a href=\"https:\/\/dblp.uni-trier.de\/pid\/415\/6378.html\" target=\"\">Xufeng Jian<\/a>","value_formatted":{"title":"Xufeng Jian","url":"https:\/\/dblp.uni-trier.de\/pid\/415\/6378.html","target":""},"value":{"title":"Xufeng 
Jian","url":"https:\/\/dblp.uni-trier.de\/pid\/415\/6378.html","target":""},"field":{"ID":2368,"key":"field_687f0c15f3394","label":"\u4f5c\u8005\u4e0e\u4f5c\u8005\u4e3b\u9875","name":"writer_link","aria-label":"","prefix":"acf","type":"link","value":null,"menu_order":0,"instructions":"","required":1,"id":"","class":"","conditional_logic":0,"parent":2366,"wrapper":{"width":"","class":"","id":""},"return_format":"array","allow_in_bindings":1,"_name":"writer_link","_valid":1,"parent_repeater":"field_687f08dfb7e07"}}},{"writer_link":{"simple_value_formatted":"<a href=\"https:\/\/scholar.google.com\/citations?user=dwhbTsEAAAAJ\" target=\"\">Haifeng Sun\u2020<\/a>","value_formatted":{"title":"Haifeng Sun\u2020","url":"https:\/\/scholar.google.com\/citations?user=dwhbTsEAAAAJ","target":""},"value":{"title":"Haifeng Sun\u2020","url":"https:\/\/scholar.google.com\/citations?user=dwhbTsEAAAAJ","target":""},"field":{"ID":2368,"key":"field_687f0c15f3394","label":"\u4f5c\u8005\u4e0e\u4f5c\u8005\u4e3b\u9875","name":"writer_link","aria-label":"","prefix":"acf","type":"link","value":null,"menu_order":0,"instructions":"","required":1,"id":"","class":"","conditional_logic":0,"parent":2366,"wrapper":{"width":"","class":"","id":""},"return_format":"array","allow_in_bindings":1,"_name":"writer_link","_valid":1,"parent_repeater":"field_687f08dfb7e07"}}},{"writer_link":{"simple_value_formatted":"<a href=\"https:\/\/scholar.google.com\/citations?user=ISnvxZQAAAAJ&hl=zh-CN\" target=\"\">Menghao Zhang<\/a>","value_formatted":{"title":"Menghao Zhang","url":"https:\/\/scholar.google.com\/citations?user=ISnvxZQAAAAJ&hl=zh-CN","target":""},"value":{"title":"Menghao 
Zhang","url":"https:\/\/scholar.google.com\/citations?user=ISnvxZQAAAAJ&hl=zh-CN","target":""},"field":{"ID":2368,"key":"field_687f0c15f3394","label":"\u4f5c\u8005\u4e0e\u4f5c\u8005\u4e3b\u9875","name":"writer_link","aria-label":"","prefix":"acf","type":"link","value":null,"menu_order":0,"instructions":"","required":1,"id":"","class":"","conditional_logic":0,"parent":2366,"wrapper":{"width":"","class":"","id":""},"return_format":"array","allow_in_bindings":1,"_name":"writer_link","_valid":1,"parent_repeater":"field_687f08dfb7e07"}}},{"writer_link":{"simple_value_formatted":"<a href=\"https:\/\/scholar.google.com\/citations?user=2W2h0SwAAAAJ\" target=\"\">Qi Qi<\/a>","value_formatted":{"title":"Qi Qi","url":"https:\/\/scholar.google.com\/citations?user=2W2h0SwAAAAJ","target":""},"value":{"title":"Qi Qi","url":"https:\/\/scholar.google.com\/citations?user=2W2h0SwAAAAJ","target":""},"field":{"ID":2368,"key":"field_687f0c15f3394","label":"\u4f5c\u8005\u4e0e\u4f5c\u8005\u4e3b\u9875","name":"writer_link","aria-label":"","prefix":"acf","type":"link","value":null,"menu_order":0,"instructions":"","required":1,"id":"","class":"","conditional_logic":0,"parent":2366,"wrapper":{"width":"","class":"","id":""},"return_format":"array","allow_in_bindings":1,"_name":"writer_link","_valid":1,"parent_repeater":"field_687f08dfb7e07"}}},{"writer_link":{"simple_value_formatted":"<a href=\"https:\/\/scholar.google.com\/citations?user=j74lPwkAAAAJ&hl=en\" target=\"\">Zirui Zhuang<\/a>","value_formatted":{"title":"Zirui Zhuang","url":"https:\/\/scholar.google.com\/citations?user=j74lPwkAAAAJ&hl=en","target":""},"value":{"title":"Zirui 
Zhuang","url":"https:\/\/scholar.google.com\/citations?user=j74lPwkAAAAJ&hl=en","target":""},"field":{"ID":2368,"key":"field_687f0c15f3394","label":"\u4f5c\u8005\u4e0e\u4f5c\u8005\u4e3b\u9875","name":"writer_link","aria-label":"","prefix":"acf","type":"link","value":null,"menu_order":0,"instructions":"","required":1,"id":"","class":"","conditional_logic":0,"parent":2366,"wrapper":{"width":"","class":"","id":""},"return_format":"array","allow_in_bindings":1,"_name":"writer_link","_valid":1,"parent_repeater":"field_687f08dfb7e07"}}},{"writer_link":{"simple_value_formatted":"<a href=\"https:\/\/teacher.bupt.edu.cn\/wangjing\" target=\"\">Jing Wang<\/a>","value_formatted":{"title":"Jing Wang","url":"https:\/\/teacher.bupt.edu.cn\/wangjing","target":""},"value":{"title":"Jing Wang","url":"https:\/\/teacher.bupt.edu.cn\/wangjing","target":""},"field":{"ID":2368,"key":"field_687f0c15f3394","label":"\u4f5c\u8005\u4e0e\u4f5c\u8005\u4e3b\u9875","name":"writer_link","aria-label":"","prefix":"acf","type":"link","value":null,"menu_order":0,"instructions":"","required":1,"id":"","class":"","conditional_logic":0,"parent":2366,"wrapper":{"width":"","class":"","id":""},"return_format":"array","allow_in_bindings":1,"_name":"writer_link","_valid":1,"parent_repeater":"field_687f08dfb7e07"}}},{"writer_link":{"simple_value_formatted":"<a href=\"https:\/\/dblp.org\/pid\/60\/4951.html\" target=\"\">Jianxin Liao<\/a>","value_formatted":{"title":"Jianxin Liao","url":"https:\/\/dblp.org\/pid\/60\/4951.html","target":""},"value":{"title":"Jianxin 
Liao","url":"https:\/\/dblp.org\/pid\/60\/4951.html","target":""},"field":{"ID":2368,"key":"field_687f0c15f3394","label":"\u4f5c\u8005\u4e0e\u4f5c\u8005\u4e3b\u9875","name":"writer_link","aria-label":"","prefix":"acf","type":"link","value":null,"menu_order":0,"instructions":"","required":1,"id":"","class":"","conditional_logic":0,"parent":2366,"wrapper":{"width":"","class":"","id":""},"return_format":"array","allow_in_bindings":1,"_name":"writer_link","_valid":1,"parent_repeater":"field_687f08dfb7e07"}}},{"writer_link":{"simple_value_formatted":"<a href=\"https:\/\/jericwang.github.io\/\" target=\"\">Jingyu Wang\u2020<\/a>","value_formatted":{"title":"Jingyu Wang\u2020","url":"https:\/\/jericwang.github.io\/","target":""},"value":{"title":"Jingyu Wang\u2020","url":"https:\/\/jericwang.github.io\/","target":""},"field":{"ID":2368,"key":"field_687f0c15f3394","label":"\u4f5c\u8005\u4e0e\u4f5c\u8005\u4e3b\u9875","name":"writer_link","aria-label":"","prefix":"acf","type":"link","value":null,"menu_order":0,"instructions":"","required":1,"id":"","class":"","conditional_logic":0,"parent":2366,"wrapper":{"width":"","class":"","id":""},"return_format":"array","allow_in_bindings":1,"_name":"writer_link","_valid":1,"parent_repeater":"field_687f08dfb7e07"}}}],"value":[{"field_687f0c15f3394":{"title":"Geng Chen*","url":"https:\/\/dblp.uni-trier.de\/pid\/76\/4764-6.html","target":""}},{"field_687f0c15f3394":{"title":"Pengfei Ren*","url":"https:\/\/pengfeiren96.github.io\/","target":""}},{"field_687f0c15f3394":{"title":"Xufeng Jian","url":"https:\/\/dblp.uni-trier.de\/pid\/415\/6378.html","target":""}},{"field_687f0c15f3394":{"title":"Haifeng Sun\u2020","url":"https:\/\/scholar.google.com\/citations?user=dwhbTsEAAAAJ","target":""}},{"field_687f0c15f3394":{"title":"Menghao Zhang","url":"https:\/\/scholar.google.com\/citations?user=ISnvxZQAAAAJ&hl=zh-CN","target":""}},{"field_687f0c15f3394":{"title":"Qi 
Qi","url":"https:\/\/scholar.google.com\/citations?user=2W2h0SwAAAAJ","target":""}},{"field_687f0c15f3394":{"title":"Zirui Zhuang","url":"https:\/\/scholar.google.com\/citations?user=j74lPwkAAAAJ&hl=en","target":""}},{"field_687f0c15f3394":{"title":"Jing Wang","url":"https:\/\/teacher.bupt.edu.cn\/wangjing","target":""}},{"field_687f0c15f3394":{"title":"Jianxin Liao","url":"https:\/\/dblp.org\/pid\/60\/4951.html","target":""}},{"field_687f0c15f3394":{"title":"Jingyu Wang\u2020","url":"https:\/\/jericwang.github.io\/","target":""}}],"field":{"ID":2366,"key":"field_687f08dfb7e07","label":"\u4f5c\u8005","name":"writer","aria-label":"","prefix":"acf","type":"repeater","value":null,"menu_order":0,"instructions":"","required":1,"id":"","class":"","conditional_logic":0,"parent":52,"wrapper":{"width":"","class":"","id":""},"acfe_repeater_stylised_button":0,"layout":"row","pagination":0,"min":0,"max":0,"collapsed":"","button_label":"Add Row","rows_per_page":20,"_name":"writer","_valid":1,"sub_fields":[{"ID":2368,"key":"field_687f0c15f3394","label":"\u4f5c\u8005\u4e0e\u4f5c\u8005\u4e3b\u9875","name":"writer_link","aria-label":"","prefix":"acf","type":"link","value":null,"menu_order":0,"instructions":"","required":1,"id":"","class":"","conditional_logic":0,"parent":2366,"wrapper":{"width":"","class":"","id":""},"return_format":"array","allow_in_bindings":1,"_name":"writer_link","_valid":1,"parent_repeater":"field_687f08dfb7e07"}]}},"\u4f1a\u8bae\u540d\u79f0":{"simple_value_formatted":"NeurIPS","value_formatted":"NeurIPS","value":"NeurIPS","field":{"ID":53,"key":"field_6759c5b33fdb3","label":"\u4f1a\u8bae\u540d\u79f0","name":"\u4f1a\u8bae\u540d\u79f0","aria-label":"","prefix":"acf","type":"text","value":null,"menu_order":1,"instructions":"","required":1,"id":"","class":"","conditional_logic":0,"parent":52,"wrapper":{"width":"","class":"","id":""},"default_value":"\u586b\u5199\u4f1a\u8bae","maxlength":"","allow_in_bindings":1,"placeholder":"","prepend":"","append":"","_name":"\u
4f1a\u8bae\u540d\u79f0","_valid":1}},"\u5e74":{"simple_value_formatted":"2025","value_formatted":"2025","value":"2025","field":{"ID":254,"key":"field_675b036a7706e","label":"\u5e74","name":"\u5e74","aria-label":"","prefix":"acf","type":"text","value":null,"menu_order":2,"instructions":"","required":1,"id":"","class":"","conditional_logic":0,"parent":52,"wrapper":{"width":"","class":"","id":""},"default_value":2024,"maxlength":"","allow_in_bindings":1,"placeholder":"","prepend":"","append":"","_name":"\u5e74","_valid":1}},"code":{"simple_value_formatted":"","value_formatted":"","value":"","field":{"ID":54,"key":"field_6759c5dc3fdb4","label":"code","name":"code","aria-label":"","prefix":"acf","type":"link","value":null,"menu_order":3,"instructions":"","required":0,"id":"","class":"","conditional_logic":0,"parent":52,"wrapper":{"width":"","class":"","id":""},"return_format":"array","allow_in_bindings":1,"_name":"code","_valid":1}},"arxiv":{"simple_value_formatted":"","value_formatted":"","value":"","field":{"ID":55,"key":"field_6759c5f83fdb5","label":"arXiv","name":"arxiv","aria-label":"","prefix":"acf","type":"link","value":null,"menu_order":4,"instructions":"","required":0,"id":"","class":"","conditional_logic":0,"parent":52,"wrapper":{"width":"","class":"","id":""},"return_format":"array","allow_in_bindings":0,"_name":"arxiv","_valid":1}},"pdf":{"simple_value_formatted":"<a href=\"https:\/\/neurips.cc\/virtual\/2025\/loc\/san-diego\/poster\/117592\" target=\"_blank\" rel=\"noreferrer 
noopener\">PDF<\/a>","value_formatted":{"title":"PDF","url":"https:\/\/neurips.cc\/virtual\/2025\/loc\/san-diego\/poster\/117592","target":"_blank"},"value":{"title":"PDF","url":"https:\/\/neurips.cc\/virtual\/2025\/loc\/san-diego\/poster\/117592","target":"_blank"},"field":{"ID":56,"key":"field_6759c6b83fdb6","label":"pdf","name":"pdf","aria-label":"","prefix":"acf","type":"link","value":null,"menu_order":5,"instructions":"","required":0,"id":"","class":"","conditional_logic":0,"parent":52,"wrapper":{"width":"","class":"","id":""},"return_format":"array","allow_in_bindings":0,"_name":"pdf","_valid":1}},"rank":{"simple_value_formatted":"CCF-A","value_formatted":"CCF-A","value":"CCF-A","field":{"ID":2316,"key":"field_686b28a2069eb","label":"\u4f1a\u8bae\/\u671f\u520a\u7ea7\u522b","name":"rank","aria-label":"","prefix":"acf","type":"text","value":null,"menu_order":6,"instructions":"","required":0,"id":"","class":"","conditional_logic":0,"parent":52,"wrapper":{"width":"","class":"","id":""},"default_value":"CCF-A","maxlength":"","allow_in_bindings":0,"placeholder":"","prepend":"","append":"","_name":"rank","_valid":1}}},"_links":{"self":[{"href":"https:\/\/cv.nirc.top\/zh\/wp-json\/wp\/v2\/posts\/2499","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cv.nirc.top\/zh\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cv.nirc.top\/zh\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cv.nirc.top\/zh\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/cv.nirc.top\/zh\/wp-json\/wp\/v2\/comments?post=2499"}],"version-history":[{"count":13,"href":"https:\/\/cv.nirc.top\/zh\/wp-json\/wp\/v2\/posts\/2499\/revisions"}],"predecessor-version":[{"id":2612,"href":"https:\/\/cv.nirc.top\/zh\/wp-json\/wp\/v2\/posts\/2499\/revisions\/2612"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cv.nirc.top\/zh\/wp-json\/wp\/v2\/media\/2527"}],"wp:attachment":[{"href":"https:\/\/cv.nirc.top\/zh\/wp-json\/wp
\/v2\/media?parent=2499"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cv.nirc.top\/zh\/wp-json\/wp\/v2\/categories?post=2499"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cv.nirc.top\/zh\/wp-json\/wp\/v2\/tags?post=2499"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}