Aaron Sidford
I am an assistant professor in the departments of Management Science and Engineering and Computer Science at Stanford University. I received my PhD from the Electrical Engineering and Computer Science department at the Massachusetts Institute of Technology, where I was advised by Jonathan Kelner. I am affiliated with the Stanford Theory Group and the Stanford Operations Research Group.

Advising. I regularly advise Stanford students from a variety of departments. If you have been admitted to Stanford, please reach out to discuss the possibility of rotating or working together; that said, I often do not respond to emails about applications. We organize regular talks, and if you are interested and are Stanford affiliated, feel free to reach out (from a Stanford email).

Contact. Office: 172 Gates Computer Science Building, 353 Jane Stanford Way, Stanford University. Email: sidford@stanford.edu.

Teaching. I created the courses Introduction to Optimization Theory (MS&E 213 / CS 269O), which I taught in Fall 2019, and Optimization Algorithms; I used variants of my lecture notes to accompany both courses. For our summer program, we will start with a primer week to learn the very basics of continuous optimization (July 26 - July 30), followed by two weeks of talks by the speakers on more advanced topics.

Lecture notes. Here are some lecture notes that I have written over the years; among other topics, they cover the eigenvalues of the Laplacian and their relationship to the connectedness of a graph. If you see any typos or issues, feel free to email me.
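To illustrate the Laplacian fact referenced in the notes: for an undirected graph with adjacency matrix A and degree matrix D, the Laplacian L = D - A always has eigenvalue 0, and its second-smallest eigenvalue (the algebraic connectivity) is strictly positive if and only if the graph is connected. Below is a minimal Python sketch of this check; it is an illustrative example with hypothetical helper names, not an excerpt from the notes.

```python
import numpy as np

def laplacian(adj: np.ndarray) -> np.ndarray:
    """Return the combinatorial Laplacian L = D - A of an undirected graph."""
    return np.diag(adj.sum(axis=1)) - adj

def is_connected(adj: np.ndarray, tol: float = 1e-9) -> bool:
    """Connectivity check via the second-smallest eigenvalue of L.

    eigvalsh returns eigenvalues in ascending order, so index 1 is the
    algebraic connectivity; it exceeds 0 exactly when the graph is connected.
    """
    eigenvalues = np.linalg.eigvalsh(laplacian(adj).astype(float))
    return eigenvalues[1] > tol

# A path on 4 vertices (connected) vs. two disjoint edges (disconnected).
path = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
two_edges = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
print(is_connected(path))       # True
print(is_connected(two_edges))  # False
```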
Research. My research is on the design and theoretical analysis of efficient algorithms and data structures. I am broadly interested in mathematics and theoretical computer science, particularly in algorithms, optimization, and numerical analysis, in graph theory and its applications, and in algorithms for machine learning with provable guarantees. I develop new iterative methods and dynamic algorithms that complement each other, resulting in improved optimization algorithms; many of the underlying problems are combinatorial, yet many advances have come from a continuous viewpoint. Representative questions include "What is the sample complexity of average-reward MDPs?" and "Is there a general continuous optimization framework for better dynamic (decremental) matching algorithms?"

Publications (selected, in reversed chronological order)

The Complexity of Infinite-Horizon General-Sum Stochastic Games, with Yujia Jin and Vidya Muthukumar, in Innovations in Theoretical Computer Science (ITCS 2023).
arXiv preprint arXiv:2301.00457 (2023).
Big-Step-Little-Step: Gradient Methods for Objectives with Multiple Scales, with Jonathan Kelner, Annie Marsden, Vatsal Sharan, Gregory Valiant, and Honglin Yuan (alphabetical authorship), in Conference on Learning Theory (COLT 2022). arXiv | conference pdf.
Efficient Convex Optimization Requires Superlinear Memory, with Annie Marsden, Vatsal Sharan, and Gregory Valiant, in Conference on Learning Theory (COLT 2022). [pdf] [talk] [poster]
Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching (2022).
Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space, CoRR abs/2101.05719 (2021).
Nearly Optimal Communication and Query Complexity of Bipartite Matching. [pdf] [slides]
Improved Lower Bounds for Submodular Function Minimization.
On the Sample Complexity of Average-Reward MDPs, BayLearn, 2021.
Coordinate Methods for Matrix Games, in Neural Information Processing Systems (NeurIPS, Oral), 2020 (arXiv).
Parallelizing Stochastic Gradient Descent for Least Squares Regression: Mini-batching, Averaging, and Model Misspecification, with Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli.
Accelerated Methods for Nonconvex Optimization, SIAM Journal on Optimization, 2018 (arXiv).
A Faster Cutting Plane Method and its Implications for Combinatorial and Convex Optimization, in Symposium on Foundations of Computer Science (FOCS 2015), Machtey Award for Best Student Paper (arXiv).
Efficient Inverse Maintenance and Faster Algorithms for Linear Programming, in Symposium on Foundations of Computer Science (FOCS 2015) (arXiv).
Competing with the Empirical Risk Minimizer in a Single Pass, with Roy Frostig, Rong Ge, and Sham Kakade, in Conference on Learning Theory (COLT 2015) (arXiv).
Un-regularizing: Approximate Proximal Point and Faster Stochastic Algorithms for Empirical Risk Minimization, in International Conference on Machine Learning (ICML 2015) (arXiv).
Uniform Sampling for Matrix Approximation, with Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, and Richard Peng, in Innovations in Theoretical Computer Science (ITCS 2015) (arXiv).
Path-Finding Methods for Linear Programming: Solving Linear Programs in Õ(√rank) Iterations and Faster Algorithms for Maximum Flow, with Yin Tat Lee, in Symposium on Foundations of Computer Science (FOCS 2014), Best Paper Award and Machtey Award for Best Student Paper (arXiv).
Single Pass Spectral Sparsification in Dynamic Streams, with Michael Kapralov, Yin Tat Lee, Cameron Musco, and Christopher Musco, SIAM Journal on Computing.
An Almost-Linear-Time Algorithm for Approximate Max Flow in Undirected Graphs, and its Multicommodity Generalizations, with Jonathan A. Kelner, Yin Tat Lee, and Lorenzo Orecchia, in Symposium on Discrete Algorithms (SODA 2014) (arXiv).
Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems, with Yin Tat Lee, in Symposium on Foundations of Computer Science (FOCS 2013) (arXiv).
A Simple, Combinatorial Algorithm for Solving SDD Systems in Nearly-Linear Time, with Jonathan A.
Kelner, Lorenzo Orecchia, and Zeyuan Allen Zhu, in Symposium on the Theory of Computing (STOC 2013) (arXiv); SIAM Journal on Computing (arXiv before merge).
Derandomization beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space, with Jack Murtagh, Omer Reingold, and Salil P. Vadhan; book chapter in Building Bridges II: Mathematics of Laszlo Lovasz, 2020 (arXiv).
Lower Bounds for Finding Stationary Points I.
Lower Bounds for Finding Stationary Points II: First-Order Methods.

In Improved Lower Bounds for Submodular Function Minimization, we provide a generic technique for constructing families of submodular functions to obtain lower bounds for submodular function minimization (SFM).

Accelerated Methods for Nonconvex Optimization presents an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives that is Hessian free, i.e., it only requires gradient computations, and is therefore suitable for large-scale applications.
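For context on what a gradient-only (Hessian-free) method looks like, here is a minimal sketch of classical Nesterov-style accelerated gradient descent in Python. This is not the algorithm from the paper, only the standard accelerated template that such methods build on; the function name and the quadratic test problem are illustrative assumptions.

```python
import numpy as np

def nesterov_agd(grad, x0, lipschitz, iters=1000):
    """Classical Nesterov accelerated gradient descent (sketch).

    Assumes grad is the gradient of a function whose gradient is
    `lipschitz`-Lipschitz; uses the standard 1/L step size.
    """
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / lipschitz               # gradient step at the extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2      # momentum schedule
        y = x_next + ((t - 1) / t_next) * (x_next - x) # extrapolation (momentum) step
        x, t = x_next, t_next
    return x

# Illustrative use: minimize f(x) = 0.5 x^T A x - b^T x, whose gradient is A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = nesterov_agd(lambda v: A @ v - b, np.zeros(2), lipschitz=np.linalg.eigvalsh(A)[-1])
print(x, np.linalg.solve(A, b))  # the two should approximately agree
```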
Additional works indexed by Google Scholar include:
Near-Optimal Time and Sample Complexities for Solving Markov Decision Processes with a Generative Model.
Accelerating Stochastic Gradient Descent for Least Squares Regression.
Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm.
Convex Until Proven Guilty: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions.
Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes.
Robust Shift-and-Invert Preconditioning: Faster and More Sample Efficient Algorithms for Eigenvector Computation.
My coauthors on other recent work, including papers in Advances in Neural Information Processing Systems (NeurIPS 2022), include Hilal Asi, Yair Carmon, Moses Charikar, Rong Ge, Danielle Hausler, Arun Jambulapati, Zhihao Jiang, Chi Jin, Yujia Jin, Sham M. Kakade, Yin Tat Lee, Danupon Nanongkai, Praneeth Netrapalli, Richard Peng, Thatchaphol Saranurak, Kirankumar Shiragur, Zhao Song, Kevin Tian, Jan van den Brand, and Di Wang. Between 2016 and 2022, my papers appeared in venues including the Symposium on Foundations of Computer Science (FOCS), the Symposium on Theory of Computing (STOC), the Symposium on Discrete Algorithms (SODA), Innovations in Theoretical Computer Science (ITCS), the International Colloquium on Automata, Languages and Programming (ICALP), the International Conference on Machine Learning (ICML), Advances in Neural Information Processing Systems (NeurIPS), the Conference on Learning Theory (COLT), the International Conference on Artificial Intelligence and Statistics (AISTATS), and the International Conference on Algorithmic Learning Theory (ALT).