I am a Research Scientist at Google Research in New York, where my research focuses on machine learning theory and algorithms. I received my Ph.D. in Mathematics from the Courant Institute of Mathematical Sciences at NYU, where I was fortunate to be advised by Prof. Mehryar Mohri and also to work with Corinna Cortes.
Publications
Improved Balanced Classification with Theoretically Grounded Loss Functions.
In Advances in Neural Information Processing Systems (NeurIPS 2025). San Diego, California, 2025.
Balancing the Scales: A Theoretical and Algorithmic Framework for Learning from Imbalanced Data.
In Proceedings of the 42nd International Conference on Machine Learning (ICML 2025). Vancouver, Canada, 2025.
Mastering Multiple-Expert Routing: Realizable H-Consistency and Strong Guarantees for Learning to Defer.
In Proceedings of the 42nd International Conference on Machine Learning (ICML 2025). Vancouver, Canada, 2025.
Principled Algorithms for Optimizing Generalized Metrics in Binary Classification.
In Proceedings of the 42nd International Conference on Machine Learning (ICML 2025). Vancouver, Canada, 2025.
Enhanced H-Consistency Bounds.
In Proceedings of the 36th International Conference on Algorithmic Learning Theory (ALT 2025). Milan, Italy, 2025.
Fundamental Novel Consistency Theory: H-Consistency Bounds.
Ph.D. Dissertation, New York University. New York, NY, 2025.
Cardinality-Aware Set Prediction and Top-k Classification.
In Advances in Neural Information Processing Systems (NeurIPS 2024). Vancouver, Canada, 2024.
Multi-Label Learning with Stronger Consistency Guarantees.
In Advances in Neural Information Processing Systems (NeurIPS 2024). Vancouver, Canada, 2024.
Realizable H-Consistent and Bayes-Consistent Loss Functions for Learning to Defer.
In Advances in Neural Information Processing Systems (NeurIPS 2024). Vancouver, Canada, 2024.
A Universal Growth Rate for Learning with Smooth Surrogate Losses.
In Advances in Neural Information Processing Systems (NeurIPS 2024). Vancouver, Canada, 2024.
Regression with Multi-Expert Deferral.
In Proceedings of the 41st International Conference on Machine Learning (ICML 2024). Vienna, Austria, 2024.
(Spotlight Presentation)
H-Consistency Guarantees for Regression.
In Proceedings of the 41st International Conference on Machine Learning (ICML 2024). Vienna, Austria, 2024.
Theoretically Grounded Loss Functions and Algorithms for Score-Based Multi-Class Abstention.
In Proceedings of the 27th International Conference on Artificial Intelligence and Statistics (AISTATS 2024). Valencia, Spain, 2024.
Learning to Reject with a Fixed Predictor: Application to Decontextualization.
In the Twelfth International Conference on Learning Representations (ICLR 2024). Vienna, Austria, 2024.
Predictor-Rejector Multi-Class Abstention: Theoretical Analysis and Algorithms.
In Proceedings of the 35th International Conference on Algorithmic Learning Theory (ALT 2024). San Diego, California, 2024.
Principled Approaches for Learning to Defer with Multiple Experts.
In the International Symposium on Artificial Intelligence and Mathematics (ISAIM 2024). Fort Lauderdale, Florida, 2024.
Two-Stage Learning to Defer with Multiple Experts.
In Advances in Neural Information Processing Systems (NeurIPS 2023). New Orleans, Louisiana, 2023.
Structured Prediction with Stronger Consistency Guarantees.
In Advances in Neural Information Processing Systems (NeurIPS 2023). New Orleans, Louisiana, 2023.
H-Consistency Bounds: Characterization and Extensions.
In Advances in Neural Information Processing Systems (NeurIPS 2023). New Orleans, Louisiana, 2023.
Cross-Entropy Loss Functions: Theoretical Analysis and Applications.
In Proceedings of the 40th International Conference on Machine Learning (ICML 2023). Honolulu, Hawaii, 2023.
H-Consistency Bounds for Pairwise Misranking Loss Surrogates.
In Proceedings of the 40th International Conference on Machine Learning (ICML 2023). Honolulu, Hawaii, 2023.
Ranking with Abstention.
In the ICML 2023 Workshop on The Many Facets of Preference-Based Learning. Honolulu, Hawaii, 2023.
DC-Programming for Neural Network Optimizations.
Journal of Global Optimization (JOGO), 2023.
Theoretically Grounded Loss Functions and Algorithms for Adversarial Robustness.
In Proceedings of the 26th International Conference on Artificial Intelligence and Statistics (AISTATS 2023). Valencia, Spain, 2023.
Multi-Class H-Consistency Bounds.
In Advances in Neural Information Processing Systems (NeurIPS 2022). New Orleans, Louisiana, 2022.
H-Consistency Bounds for Surrogate Loss Minimizers.
In Proceedings of the 39th International Conference on Machine Learning (ICML 2022). Baltimore, Maryland, 2022.
(Long Presentation)
A Finer Calibration Analysis for Adversarial Robustness.
CoRR, abs/2105.01550, 2021.
Calibration and Consistency of Adversarial Surrogate Losses.
In Advances in Neural Information Processing Systems (NeurIPS 2021). Online, 2021.
(Spotlight Presentation)
Teaching
Contact