publications

(*) denotes equal contribution

You can find a full list of my publications and recent works on my Google Scholar.

2025

  1. workshop
    Towards Training One-Step Diffusion Models Without Distillation   generation
    Mingtian Zhang*, Jiajun He*, Wenlin Chen*, and 4 more authors
    In Workshop on Deep Generative Model in Machine Learning @ ICLR 2025
  2. workshop
    No Trick, No Treat: Pursuits and Challenges Towards Simulation-free Training of Neural Samplers   sampling
    Jiajun He*, Yuanqi Du*, Francisco Vargas, and 5 more authors
    In Workshop on Frontiers in Probabilistic Inference: Sampling Meets Learning @ ICLR 2025
  3. AISTATS
    Training Neural Samplers with Reverse Diffusive KL Divergence   sampling
    Jiajun He*, Wenlin Chen*, Mingtian Zhang*, and 2 more authors
    In International Conference on Artificial Intelligence and Statistics (AISTATS)

2024

  1. workshop
    Getting Free Bits Back from Rotational Symmetries in LLMs   coding
    Jiajun He, Gergely Flamich, and José Miguel Hernández-Lobato
    In Workshop on Machine Learning and Compression @ NeurIPS
    Oral at Compression Workshop @ NeurIPS 2024 [Top 4%]
  2. NeurIPS
    Accelerating Relative Entropy Coding with Space Partitioning   coding
    Jiajun He, Gergely Flamich, and José Miguel Hernández-Lobato
    In Advances in Neural Information Processing Systems (NeurIPS)
  3. ICLR
    RECOMBINER: Robust and Enhanced Compression with Bayesian Implicit Neural Representations   coding
    Jiajun He*, Gergely Flamich*, Zongyu Guo, and 1 more author
    In International Conference on Learning Representations (ICLR)

2023

  1. NeurIPS
    Compression with Bayesian Implicit Neural Representations   coding
    Zongyu Guo*, Gergely Flamich*, Jiajun He, and 2 more authors
    In Advances in Neural Information Processing Systems (NeurIPS)
    Spotlight at NeurIPS 2023
  2. MPhil Thesis
    Data Compression with Variational Implicit Neural Representations   coding
    Jiajun He
    University of Cambridge