I'm a former ML engineer turned full-stack developer. I used to work a lot with Graph Neural Networks and NLP models. As a web developer, I'm most comfortable with TS, React, Express (with Node), and Postgres.
I'm an occasional writer ✍ and a frequent reader 📖. I'm an avid piano player 🎹 and an abysmal frisbee tosser 🥏.
In my free time, I contribute to and moderate the (small but growing) Geometric Deep Learning subreddit.
Most recently I've been working with my friend and co-founder Albert Wang on Comend, a social platform for caregivers. We want to build a safe and productive place for people to talk about their health openly.
I recently started releasing original piano compositions on YouTube and Spotify.
For the latest updates on what I'm doing, my setup, or just to learn a bit more about me, visit my blog 📝.
Entrepreneur First (EF) - Founder in Residence
March 2022 - Feb 2023 - Toronto, Canada
Kebotix Inc. - Machine Learning Developer Intern
Sept 2020 - May 2021 - Boston, MA
Relation Therapeutics - Machine Learning Research Intern
Sept 2019 - May 2020 - London, UK
ML for Chem/Bio-informatics - Personal Projects
Oct 2018 - June 2019 - Toronto, Canada
Canadian Imperial Bank of Commerce (CIBC) - Front-End Developer
June 2018 - Sept 2018 - Toronto, Canada
Jake P. Taylor-King, Cristian Regep, Jyothish Soman, Flawnson Tong, Catalina Cangea, Charlie Roberts
DDD fits continuous-time Markov chains over these basis functions and, as a result, continuously maps between distributions. The number of parameters in DDD scales with the square of the number of basis functions; we reformulate the problem and restrict the method to compact basis functions, so that only sparse matrices are inferred, reducing the number of parameters.
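To make the sparsity point concrete, here's a small NumPy sketch (not code from the paper; the "hat" basis and grid sizes are illustrative assumptions). With compactly supported basis functions, only neighbouring functions overlap, so pairwise-interaction matrices over the basis come out banded rather than dense:

```python
import numpy as np

def hat(x, center, width):
    # Compactly supported "hat" basis function:
    # nonzero only on the interval (center - width, center + width).
    return np.maximum(0.0, 1.0 - np.abs(x - center) / width)

n_basis = 10
centers = np.linspace(0.0, 1.0, n_basis)
width = centers[1] - centers[0]  # each bump overlaps only its immediate neighbours

x = np.linspace(0.0, 1.0, 1000)
B = np.stack([hat(x, c, width) for c in centers])  # (n_basis, n_points)

# Pairwise overlap (Gram) matrix on the grid. With globally supported basis
# functions this would generically have n_basis**2 nonzero entries; with
# compact support it is tridiagonal (3 * n_basis - 2 nonzeros).
G = B @ B.T * (x[1] - x[0])
nonzero = np.count_nonzero(~np.isclose(G, 0.0))
print(nonzero, "of", n_basis**2, "entries are nonzero")
```

The same structural argument applies to the matrices inferred by the method: compact support means most pairs of basis functions never interact, so the unknowns shrink from quadratic in the basis size to roughly linear.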