A priori → a posteriori captures how intelligent systems learn and evolve: assumptions are tested and often refined by evidence. In many ways, human learning mirrors this, as if the world were a vast Bayesian network, continuously updating beliefs as new facts emerge.
This philosophy has shaped my journey. Although my exposure to AI began with unsupervised learning for face recognition over two decades ago, my focus had largely been on applications, middleware, and cloud-native architectures. Inspired by recent developments in AI, I returned to grad school while working full time, earning a master's degree in CS and deepening my understanding from first principles, with rigor.
Today, I’m an AI/ML engineer based in Cupertino, partnering with cybersecurity leaders, frontier labs, and AI-native startups across the SF/Bay Area to help them scale GenAI, Core ML, and agentic systems. I’m particularly drawn to agentic systems, where classic software engineering patterns are adapted to manage AI’s non-determinism and ensure robust evals.
Outside of work, much of my time revolves around my son, Neiv, who is growing up faster than I can keep track of. I’ve largely stayed off social platforms, though I’ve recently begun sharing thoughts on Twitter/X. I also enjoy travel photography, some of which will find its way here.