A priori → a posteriori captures how intelligent systems learn and evolve—assumptions are tested and often refined by evidence. In many ways, human learning mirrors this, as if the world were a vast Bayesian network, continuously updating beliefs as new facts emerge.
This philosophy has shaped my path. I was first exposed to AI through early work in unsupervised learning for face recognition over two decades ago, though much of my career since then has spanned application development, middleware, and cloud-native systems. More recently, as the field evolved, I chose to revisit AI formally—earning a master’s degree in computer science and strengthening my understanding from first principles.
Presently, I work as an AI/ML Engineer at Google Cloud in Sunnyvale. I partner with cybersecurity leaders, frontier labs, and AI-native startups in the SF/Bay Area, helping them scale GenAI, Core ML, and agentic workflows for developer and employee productivity. I’m particularly drawn to agentic systems, where classic software engineering patterns are adapted to manage AI’s non-determinism and ensure robust evaluation.
Outside of work, much of my time revolves around my son, Neiv, who is growing up faster than I can keep track of. I’ve largely stayed off social platforms, though I’ve recently begun sharing thoughts on Twitter/X. I also enjoy travel photography—some of which will find its way here.