Research

Overview

I’m passionate about Probabilistic Machine Learning, Neural Network Theory, Natural & Artificial Intelligence, Data and Simulation Science, and applications in Computational Biology, Medicine, and Bio/Neuroengineering.

  • I build and analyze agent-based models—recurrent neural network (RNN) controllers trained with deep reinforcement learning—to study natural behaviors and the neural dynamics that generate them. Like digital twins, these models enable rapid hypothesis testing in silico while maintaining full observability.
  • I develop theoretical and methodological tools to interpret, compare, and constrain RNNs and DeepRL-trained agents using ideas from dynamical systems theory, neuroscience, and AI interpretability.
  • I apply the broader machine learning, simulation, and data science toolbox to problems in health, biology, and medicine, including applications in mental health, cardiology, and genomics.
  • For a complete list of my publications and patents, visit my Google Scholar profile.

🧠 Agents for Neuroscience and Natural Behavior (Digital Twins)

Goal: Use deep reinforcement-learning (DeepRL) agents as computational models of animals to uncover how intelligent behavior and neural computation emerge from experience and embodiment, and to leverage this understanding to cross-pollinate ideas between AI and neuroscience.


⚙️ Theory of Neural Networks & Agents: Dynamics, Degeneracy, Trust, and Interpretability

Goal: Build a theoretical foundation to interpret, compare, and constrain Deep RL–trained RNNs through dynamical-systems and neuroscientific principles, advancing their trustworthiness, stability, and interpretability.

  • Solution Degeneracy in Task-Trained RNNs — control & quantify many-to-one solutions
    Huang AH, Singh SH, Rajan K. (2025) NeurIPS 2025 (Spotlight)

  • InputDSA — demix recurrent vs. input-driven dynamics
    Huang AH, Ostrow M, Singh SH, et al. CoSyNe 2026 (Spotlight Talk), ICLR 2026 (accepted). ArXiv: “Demixing then Comparing Recurrent and Externally Driven Dynamics”

  • Geometry of Neural Dynamics in Controllers — a learning-dynamics lens on RNNs
    Huang AH, Singh SH, et al. (2024) RLC’24 Interpretable Policies Workshop (Talk): paper, TalkRL podcast


🧬 AI for Biology and Medicine

Goal: Reliable, interpretable AI and simulation for clinical & biological questions.

  • Naturalistic Motor Neuroscience — long-term neural + video recordings; behavior mining
    Wrist Motion
  • AI for Mental Health — RACER: auditable LLM analysis of semi-structured interviews
    EMNLP 2024 NLP4Science Workshop: paper

  • AI in Cardiology — ensemble-based uncertainty quantification for pediatric ICU arrhythmia detection
    Heart Rhythm O2 (2024): PubMed

  • Multi-Omic Cancer Progression (Myelodysplastic Syndrome) — liquid biopsy markers of MDS progression
    Experimental Hematology (2024 abstract): link

  • GRN Evolution — simulation study: promiscuity drives adaptation in GRNs
    ALIFE 2023: “Binding Affinity Distributions Drive Adaptation in GRN Evolution” (paper)

  • Cerenkov — positive + unlabeled (PU) supervised learning for the noncoding regulatory variome
    ACM BCB 2017: paper
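The core idea behind positive-unlabeled learning can be illustrated with a minimal sketch in the style of Elkan & Noto (2008): train an ordinary classifier to separate labeled positives from unlabeled examples, then rescale its scores by the estimated labeling rate c. Everything below is a synthetic toy, not code from the Cerenkov project.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, s, lr=0.5, steps=3000):
    """Plain logistic regression via gradient descent, predicting the
    observed label s (1 = labeled positive, 0 = unlabeled)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - s)) / len(s)
        b -= lr * np.mean(p - s)
    return w, b

rng = np.random.default_rng(0)
X_pos = rng.normal(+2.0, 1.0, size=(400, 2))   # true positives
X_neg = rng.normal(-2.0, 1.0, size=(400, 2))   # true negatives
labeled = rng.random(400) < 0.3                # only 30% of positives carry a label
X = np.vstack([X_pos, X_neg])
s = np.concatenate([labeled.astype(float), np.zeros(400)])

w, b = fit_logreg(X, s)
# c = P(s=1 | y=1), estimated as the mean score over known positives.
c = sigmoid(X_pos[labeled] @ w + b).mean()
# Elkan-Noto correction: P(y=1 | x) = P(s=1 | x) / c, clipped to [0, 1].
p_y = np.clip(sigmoid(X @ w + b) / c, 0.0, 1.0)
print(f"c = {c:.2f}; mean P(y=1): positives {p_y[:400].mean():.2f}, "
      f"negatives {p_y[400:].mean():.2f}")
```

The correction recovers high posterior probability for unlabeled true positives even though the classifier only ever saw them as "unlabeled"; the actual Cerenkov work applies this family of ideas to noncoding variant features rather than Gaussian toys.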

🤖 Probabilistic and Bayesian Machine Learning

Goal: Uncertainty-aware and interpretable ML for real-world decision-making.

  • Meta AI (Probability Team) — uncertainty quantification & Bayesian experimentation (2022)
    Topics: Deep Ensembles, MC Dropout; Bayesian online experiments via Bean Machine
    Team: Meta Probability
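The deep-ensembles idea above can be sketched in a few lines: train M models with independent randomness and read their disagreement as epistemic uncertainty. Cheap polynomial fits on bootstrap resamples stand in for neural networks here; this is an illustrative toy, not code from any of the projects listed.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, size=60)
y_train = np.sin(3.0 * x_train) + 0.1 * rng.normal(size=60)

def fit_member(seed):
    """One ensemble member: a cubic fit on a bootstrap resample
    (stand-in for a neural net trained from a random seed)."""
    r = np.random.default_rng(seed)
    idx = r.integers(0, len(x_train), len(x_train))
    return np.polyfit(x_train[idx], y_train[idx], deg=3)

ensemble = [fit_member(seed) for seed in range(20)]

def predict(x):
    """Predictive mean and std across members; the std is the
    ensemble's (epistemic) uncertainty estimate."""
    preds = np.stack([np.polyval(coeffs, x) for coeffs in ensemble])
    return preds.mean(axis=0), preds.std(axis=0)

_, std_in = predict(np.array([0.0]))    # inside the training range
_, std_out = predict(np.array([3.0]))   # far outside it
print(f"std in-distribution: {std_in[0]:.3f}, out-of-distribution: {std_out[0]:.3f}")
```

Member disagreement is small where training data is dense and grows sharply under extrapolation, which is exactly the signal uncertainty-aware decision-making relies on; MC Dropout produces an analogous spread by sampling dropout masks at inference time instead of training separate models.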

🧩 Other Projects and Patents