My Background

I'm a passionate lifelong learner and self-taught developer with a background in data science, aiming to use my technical skills to benefit the world. I currently work in the Emerging Market Economies section of the Federal Reserve Board's International Finance division, while completing my M.S. in Computational Data Analytics at Georgia Tech (August 2022 to August 2024). I graduated summa cum laude from Claremont McKenna College in May 2022 with a degree in physics and applied mathematics. Check out the Professional Experience section below for highlights of my work.

Currently, I'm focused on developing my skills in Go and deepening my knowledge of computer networking and web development. In my spare time, I enjoy cooking, reading, chess, video games, the occasional (read: frequent) YouTube binge, learning, and building new things!

Technologies

Backend & Scripting Languages

Python Go R Scala Visual Basic

Web Development

HTMX Svelte Dash JavaScript PHP HTML CSS

Data Science & Engineering

PostgreSQL Hadoop Spark NetworkX Pandas/Polars Scikit-Learn PyTorch Tidyverse

DevOps

Docker AWS GCP Databricks

Data Visualization

Matplotlib/Seaborn Plotly d3.js ggplot2

Professional Experience

Much of my experience to date has involved maintaining and improving large legacy systems. Over the years, I've developed a solid framework for tracking down key code components by following data flows and function calls and by understanding historical naming and design conventions.

Having spent most of my career working in cross-functional teams, I've honed my ability to communicate with stakeholders of varying technical expertise. I understand that different audiences require different types and levels of detail, and I shift my communication accordingly.

I also develop internal tooling, including a full-stack Emerging Market macroeconomic data release calendar application, internal libraries, and command-line utilities. In addition, I've built tools to help display and manage job scheduling and load balancing for recurring visualization packets, pipelines, and dashboards.

Finally, I have experience in building data pipelines through the whole stage, from vendor negotiations and data ingestion through generating bronze, silver, and gold tables for use production processes. In doing so, I've implemented improvements to my section's seasonal adjustment pipelines through extensive research into X-13. I've worked with large (> 100 TB scale) data, and in the process have learned how to effectively leverage high-performance distributed computing environments like Spark.