About Me
Welcome! I'm Dominick, and this is my personal blog. I love data, learning new things, and finding new ways of looking at the world.
Professionally, I work as a Senior Engineer, developing API integrations, automating reporting, and configuring and deploying cloud services. I'm lucky to be on a team of awesome, dedicated people, where we build solutions for a whole range of SaaS, PaaS, and legacy-system challenges.
Day-to-day, I get to use a lot of great technology like Python, SQL, Spark, and Airflow for data processing and orchestration. I work extensively with modern data platforms like Databricks, Snowflake, and Apache Iceberg for building data lakehouses. For performance-critical data transformations, I leverage libraries like Polars and PySpark.
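To give a flavor of that work, here's a minimal Polars sketch of a lazy aggregation; the file and column names are hypothetical:

```python
import polars as pl

# Hypothetical example: daily event counts per device.
# Lazy evaluation lets Polars optimize the whole query before running it.
df = (
    pl.scan_csv("events.csv")  # hypothetical input file
    .with_columns(pl.col("ts").str.to_datetime())
    .group_by(pl.col("ts").dt.date().alias("day"), "device_id")
    .agg(pl.len().alias("events"))
    .sort("day")
    .collect()
)
print(df.head())
```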
I specialize in healthcare data integration, with a focus on solving critical business challenges. In my work with LHCSAs and certified home health care agencies, I built data solutions that directly addressed issues like delayed payments and operational inefficiencies. I also have hands-on experience managing the technical infrastructure, having built and maintained a data platform that integrates APIs from over 2,000 devices to provide real-time support desk metrics.
My projects have involved building robust systems to handle a wide range of data formats and standards, from EDI transactions on legacy on-premises systems to modern cloud-based solutions built on FHIR. I have particular expertise in the technical implementation and management of HHA Exchange, which was central to improving the flow of payment data.
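Since FHIR comes up a lot in that work, here's a minimal sketch of pulling a resource over FHIR's REST API with Python's requests library; it points at the public HAPI FHIR test server purely for illustration, not any system I actually integrate with:

```python
import requests

# Minimal sketch: search for Patient resources on a FHIR R4 server.
# The base URL is the public HAPI FHIR test server, for illustration only.
base = "https://hapi.fhir.org/baseR4"
resp = requests.get(f"{base}/Patient", params={"_count": 5}, timeout=30)
resp.raise_for_status()

# FHIR search results come back as a Bundle resource with an "entry" list.
bundle = resp.json()
for entry in bundle.get("entry", []):
    patient = entry["resource"]
    print(patient["id"], patient.get("name", [{}])[0].get("family", "<no name>"))
```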
I often implement a Medallion Architecture, moving raw data from a bronze layer through cleansed silver tables into refined, analytics-ready gold tables, which keeps data quality and reliability front and center. By pairing an Event-Driven Architecture with Microservices, I can build highly scalable data services that process information in real time, unlocking key business improvements. My focus is always on building the right integration for the problem, staying agnostic across on-premises, hybrid, and cloud deployments while leveraging platforms like GCP, Azure, and AWS.
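As a rough illustration of that bronze-to-gold flow, here's a minimal PySpark sketch; the paths, table, and column names are all hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: raw events landed as-is (path is hypothetical).
bronze = spark.read.json("s3://lake/bronze/visit_events/")

# Silver: cleaned and conformed — typed timestamps, deduplicated records.
silver = (
    bronze
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .dropDuplicates(["visit_id", "event_ts"])
    .filter(F.col("visit_id").isNotNull())
)

# Gold: analytics-ready aggregate, e.g. daily visit counts per agency.
gold = (
    silver
    .groupBy(F.to_date("event_ts").alias("day"), "agency_id")
    .agg(F.count("*").alias("visits"))
)
gold.write.mode("overwrite").saveAsTable("gold.daily_visits")
```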
GPU / CUDA work
Recently I've been exploring GPU-accelerated data processing to speed up ETL and analytics workloads. See my write-up "Getting ~70x Data Processing Speed with GPU vs CPU" for benchmarks, links to the CUDA Toolkit, and recommended tooling such as RAPIDS and Triton. If you'd like to reproduce my environment locally, I include example conda commands and a RAPIDS-compatible environment in that post.
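As a taste of what that looks like in practice, here's a minimal cuDF sketch; cuDF mirrors much of the pandas API on the GPU, and the file and column names below are hypothetical (see the post for the exact environment setup):

```python
import cudf  # RAPIDS GPU DataFrame library; needs an NVIDIA GPU + CUDA

# Typical pandas-style ETL ports almost directly to cuDF.
df = cudf.read_parquet("events.parquet")  # hypothetical input file
out = (
    df[df["status"] == "complete"]
    .groupby("device_id")["latency_ms"]
    .mean()
    .sort_values(ascending=False)
)
print(out.head(10))
```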
Interactive Housing Simulation (React)
I built an interactive housing market simulation as a small React app embedded in this site. It demonstrates agent-based dynamics and lets you tweak parameters to see how supply, demand, and simple policy changes affect outcomes. The simulation lives at src/components/HousingSim/HousingSimulation.jsx and is mounted on the Housing Simulation page. You can also view the component source on GitHub: HousingSimulation.jsx.
To run the simulation locally: clone the repo, install dependencies, and run the dev server. The simulation uses the Astro + React integration and is mounted with client-only React hydration so the UI is interactive in the browser.
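For the curious, here's roughly what the mount looks like in an Astro page; the page filename is hypothetical, but client:only="react" is the Astro directive that gives you client-only React hydration:

```astro
---
// src/pages/housing-simulation.astro (page path is hypothetical)
import HousingSimulation from '../components/HousingSim/HousingSimulation.jsx';
---
<!-- client:only skips server rendering and hydrates purely in the browser -->
<HousingSimulation client:only="react" />
```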