Research
Our research focuses on core foundational challenges, integrating mathematical tools with real-world objectives to advance the state of the art. We pursue ambitious, use-inspired research targeting frontier perceptual tasks in video, imaging, and navigation.
Research Thrusts
Algorithms and Optimization
A central mission of IFML is to push the boundaries of algorithms and optimization for essential tasks in both supervised and unsupervised learning. Our work spans new methods for training neural networks and other machine learning models, especially in settings with noisy or imperfect data. We focus on advancing gradient-based techniques, developing adaptive optimization strategies, and establishing theoretical lower bounds that define the computational limits of current algorithms and neural architectures.
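To make the adaptive-optimization theme concrete, the following is a minimal sketch (not an IFML method) of an Adam-style update, which rescales each coordinate's step size using running estimates of the gradient's first and second moments; the objective, hyperparameters, and noisy-gradient oracle are all illustrative assumptions.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam-style adaptive update: per-coordinate step sizes are scaled
    by running estimates of the gradient's first and second moments."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment (magnitude) estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # adaptive per-coordinate step
    return w, m, v

# Illustration: minimize a noisy quadratic f(w) = ||w||^2 with noisy gradients.
rng = np.random.default_rng(0)
w, m, v = np.ones(5), np.zeros(5), np.zeros(5)
for t in range(1, 501):
    grad = 2 * w + 0.1 * rng.standard_normal(5)  # noisy gradient oracle
    w, m, v = adam_step(w, grad, m, v, t, lr=0.05)
print(w)  # close to the minimizer at 0
```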
Foundation Models
Foundation models have transformed the landscape of AI research and development. Trained on massive and diverse datasets, these models make it significantly easier to build specialized systems using minimal additional data. They can be prompted directly for new tasks, including through in-context learning, where a few examples guide the model’s behavior. They can also be fine-tuned on domain-specific data or used to generate synthetic datasets for training new models.
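As a concrete, hypothetical illustration of in-context learning, the sketch below assembles a few-shot prompt in which a handful of labeled examples steer a model toward a sentiment-labeling task without any fine-tuning; the examples are invented, and `call_model` is a placeholder for whatever inference API a given foundation model exposes.

```python
# Build a few-shot prompt: the labeled examples placed in the prompt let the
# model infer the task from context, with no gradient updates.
examples = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I want those two hours of my life back.", "negative"),
    ("A solid cast carries a thin script.", "positive"),
]
query = "The plot made no sense and the pacing was worse."

prompt = "Label the sentiment of each review.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"

# completion = call_model(prompt)  # placeholder call; expected to continue "negative"
print(prompt)
```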
Diffusion
Recent years have seen remarkable progress in image generation, led by models such as Stable Diffusion and DALL·E, which can create high-quality images from open-ended text prompts like "an astronaut riding a horse." This leap is powered by diffusion models, a class of generative models that outperform earlier approaches such as GANs. Their strength lies in a stable training process built on a simple regression loss, a step-by-step refinement of noise into data, and solid theoretical underpinnings that make them well suited to future applications.
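The "simple regression loss" can be made concrete with a minimal DDPM-style training sketch: noise a clean sample to a random timestep and train the network to predict the added noise under a mean-squared error. The model interface, batch shapes, and linear noise schedule below are illustrative assumptions, not a specific IFML implementation.

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # assumed linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal fraction

def diffusion_loss(model, x0):
    """x0: batch of clean samples, shape (B, ...); model(x_t, t) predicts noise."""
    B = x0.shape[0]
    t = torch.randint(0, T, (B,))              # random timestep per sample
    noise = torch.randn_like(x0)               # target noise epsilon
    a = alpha_bar[t].view(B, *([1] * (x0.dim() - 1)))
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise  # noised sample at step t
    pred = model(x_t, t)                       # network predicts the added noise
    return F.mse_loss(pred, noise)             # the simple regression loss
```

Sampling then runs the refinement in reverse: starting from pure noise, the trained network is applied step by step to gradually denoise toward a sample.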
Robustness and Safety
This research thrust tackles foundational challenges in the safe and reliable deployment of generative AI systems. A major barrier to applying AI in high-stakes domains like healthcare is distribution shift: how can we trust a model trained on one population to perform well on another? We explore how to interpret complex models to better understand their decisions, assess robustness to corruptions such as data poisoning and adversarial inputs, and develop methods to enforce guardrails that align model behavior with human intent.
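As a toy illustration of why distribution shift matters (a synthetic example, not IFML's evaluation protocol), the sketch below trains a linear classifier on a population where a spurious feature tracks the label, then evaluates it on a population where that correlation flips; accuracy collapses even though the causal signal is unchanged.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_population(n, spurious_agreement):
    """Synthetic population: one weakly informative causal feature and one
    spurious feature that agrees with the label a given fraction of the time."""
    y = rng.integers(0, 2, size=n)
    causal = y + rng.normal(scale=1.0, size=n)
    agree = rng.random(n) < spurious_agreement
    spurious = np.where(agree, y, 1 - y) + rng.normal(scale=0.1, size=n)
    return np.column_stack([causal, spurious]), y

X_tr, y_tr = make_population(10_000, spurious_agreement=0.95)  # source population
X_sh, y_sh = make_population(10_000, spurious_agreement=0.05)  # shifted population

clf = LogisticRegression().fit(X_tr, y_tr)
print("source accuracy :", clf.score(*make_population(10_000, 0.95)))
print("shifted accuracy:", clf.score(X_sh, y_sh))  # far worse than on the source
```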
Use-Inspired Applications
Key use cases include integrating mathematical tools to drive breakthroughs in medical imaging, protein engineering, automated theorem proving, and open-source AI. Building on research themes initiated in 2020, we have since refined our focus to emphasize use-inspired research that aligns with our foundational thrusts and fosters multi-institutional collaboration. These efforts target areas where generative AI can deliver global impact: enhancing medical care, accelerating vaccine development, expanding access to AI, and advancing mathematical and scientific discovery.