Machine-Learning Projects

Computational Attestations of Polynomial Integrity

DP Training Convergence

A project combining differentially-private machine learning with quantum-secure, non-interactive cryptographic proofs of computation. Leveraging the RISC Zero zkVM, this system ensures training integrity in MLaaS (Machine Learning as a Service) settings while maintaining privacy.

This approach processes datasets, trains models, and generates zero-knowledge receipts to verify computation honesty without revealing sensitive data. Supports both CPU and GPU (CUDA) environments for scalable and high-performance execution.
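The differentially-private training step at the heart of this kind of system can be sketched in plain Python. This is an illustrative NumPy version of DP-SGD (per-example gradient clipping followed by calibrated Gaussian noise), not the project's actual code; the clip norm and noise multiplier shown are arbitrary placeholder values.

```python
import numpy as np

def dp_sgd_step(per_example_grads, l2_norm_clip=1.0, noise_multiplier=1.1,
                rng=None):
    """One DP-SGD update: clip each example's gradient, average, add noise."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its L2 norm exceeds the clip bound.
        clipped.append(g * min(1.0, l2_norm_clip / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise standard deviation is proportional to the sensitivity (clip bound).
    noise = rng.normal(0.0, noise_multiplier * l2_norm_clip / len(clipped),
                       size=mean_grad.shape)
    return mean_grad + noise

grads = [np.array([3.0, 4.0]), np.array([0.1, 0.2])]  # toy per-example grads
update = dp_sgd_step(grads)
```

Because each example's influence on the update is bounded by the clip norm, the added noise yields a quantifiable (epsilon, delta) privacy guarantee via standard DP accounting.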

Watch the video walkthrough

Experiments in Computer Vision and Autonomous Vehicle Control

Self-Driving Python SuperTuxKart Agent

Develops a fully autonomous driving agent that navigates a simulated SuperTuxKart environment using computer-vision models trained on programmatically generated datasets. The agent maintains reliable performance with only occasional errors despite a relatively modest dataset, showing that compact neural network architectures can capture the patterns this control task requires.
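Agents of this kind typically split the problem into a vision model that predicts an on-screen aim point and a simple controller that converts it into driving commands. A hypothetical controller sketch follows; the gain, speed threshold, and drift cutoff are illustrative values, not the assignment's solution.

```python
def steer_from_aim_point(aim_x, gain=3.0):
    """Map a normalized horizontal aim point in [-1, 1] to a steering command.

    aim_x is assumed to come from a CNN regressing the point the kart
    should drive toward; positive values mean 'steer right'.
    """
    return max(-1.0, min(1.0, gain * aim_x))

def act(aim_x, velocity, target_speed=20.0):
    """Build a minimal action: steer toward the aim point, throttle to speed."""
    return {
        "steer": steer_from_aim_point(aim_x),
        "acceleration": 1.0 if velocity < target_speed else 0.0,
        "drift": abs(aim_x) > 0.5,  # engage drift only on sharp turns
    }
```

Keeping the controller this simple pushes all the difficulty into the vision model, which is why dataset quality matters more than controller sophistication in this setup.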

(This is an accepted solution to a graduate school assignment created by Dr. Philipp Krähenbühl of the University of Texas at Austin. Out of respect for Dr. Krähenbühl and his future students, a link to this solution is omitted.)

Differentially-Private Domain Generation Algorithm Detection

This project explores the creation of differentially-private machine learning models for detecting domain generation algorithms (DGA). By leveraging TensorFlow-Privacy and Keras, the work ensures that trained models reveal no information about their training data, satisfying strict privacy constraints.

Experimental results across CNN, MLP, and LSTM architectures demonstrate the trade-off between privacy and model utility, with accuracy improving under optimized conditions. Future work aims to enhance performance via vectorized approaches to differential privacy for efficient use of parallel hardware.
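The vectorization mentioned as future work can be illustrated in NumPy: rather than clipping per-example gradients one at a time in a Python loop, all norms and scale factors are computed in one batched operation, which maps well onto parallel hardware. A hypothetical sketch, not the project's implementation:

```python
import numpy as np

def clip_gradients_vectorized(grads, l2_norm_clip):
    """Clip a (batch, dim) array of per-example gradients in one shot."""
    norms = np.linalg.norm(grads, axis=1, keepdims=True)            # (batch, 1)
    scale = np.minimum(1.0, l2_norm_clip / np.maximum(norms, 1e-12))
    return grads * scale  # broadcast each row's scale across its gradient

batch = np.array([[3.0, 4.0],   # norm 5.0 -> scaled down to norm 1.0
                  [0.3, 0.4]])  # norm 0.5 -> left unchanged
clipped = clip_gradients_vectorized(batch, l2_norm_clip=1.0)
```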