Michael Bock's Github Page

About me

I'm a Computer Science major in the class of 2022 at Virginia Tech. I chose Computer Science because I enjoy solving difficult problems with code, and I think becoming an engineer will let me make a positive impact on the world.

Projects

Infinitam

I'm on a Microsoft Imagine Cup 2020 team. The team is building an iOS app that evaluates the health of beehives. The main mission behind the project is to give hobbyist beekeepers an easy way to evaluate hive health.

VT Baja Testing Team

I help program the sensor suite for the Virginia Tech Baja 2020 car. Specifically, I wrote the data-reading software in C++ for the Inertial Measurement Unit (IMU), a linear potentiometer, and several other sensors. I also helped write the software that logs data from the sensors to a CSV file.
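The actual logger is written in C++, but the idea is simple enough to sketch in a few lines of Python: poll the sensor at a fixed rate and append one timestamped row per reading. The `read_imu` stub below is a stand-in for the real sensor driver, not our actual code.

```python
import csv
import random
import time

def read_imu():
    """Stand-in for the real IMU driver: returns (ax, ay, az) in g's.
    The actual system reads these values from hardware."""
    return tuple(random.gauss(0, 0.1) for _ in range(3))

def log_sensors(path, samples=100, period_s=0.01):
    """Append timestamped IMU readings to a CSV file, one row per sample."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time_s", "accel_x_g", "accel_y_g", "accel_z_g"])
        start = time.time()
        for _ in range(samples):
            ax, ay, az = read_imu()
            writer.writerow([round(time.time() - start, 4), ax, ay, az])
            time.sleep(period_s)

if __name__ == "__main__":
    log_sensors("baja_run.csv")
```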

RLBot

Over the past year I've built a bot that plays Rocket League, a video game where cars play soccer. I just finished my second attempt, named "koramund", and I am now working on a third that will use reinforcement learning to optimize movement around the field. More information on Rocket League bots can be found at rlbot.org. I will also post more information about both koramund and my new bot on this website.
If you want to see koramund in action against some of the world's best Rocket League bots, check out the Lightfall tournament held in October 2019:
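For a sense of what an RLBot agent looks like, here is a minimal toy bot using the framework's Python API. It just steers toward the ball at full throttle; koramund's actual logic is far more involved.

```python
import math

from rlbot.agents.base_agent import BaseAgent, SimpleControllerState
from rlbot.utils.structures.game_data_struct import GameTickPacket

class ChaseBallAgent(BaseAgent):
    """Toy agent: steer toward the ball and drive at full throttle.
    Not koramund's logic, just the shape of an RLBot agent."""

    def get_output(self, packet: GameTickPacket) -> SimpleControllerState:
        ball = packet.game_ball.physics.location
        car = packet.game_cars[self.index].physics

        # Angle from the car's heading to the ball, wrapped to [-pi, pi].
        angle_to_ball = math.atan2(ball.y - car.location.y,
                                   ball.x - car.location.x)
        steer_angle = angle_to_ball - car.rotation.yaw
        steer_angle = (steer_angle + math.pi) % (2 * math.pi) - math.pi

        controls = SimpleControllerState()
        controls.throttle = 1.0
        controls.steer = max(-1.0, min(1.0, 2.0 * steer_angle))
        return controls
```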

LoCurated

LoCurated is a local delivery business I help run with two of my friends. My role is to design and implement the algorithm that tells our drivers what order to deliver packages in. There are a lot of ways to route drivers around Blacksburg and Christiansburg, so I'm almost always thinking about new algorithms to increase our business's throughput.
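For a flavor of the problem, here is the classic greedy nearest-neighbor baseline in Python. This is not our production algorithm, and the coordinates are made up, but it shows the core question: given a depot and a set of stops, in what order should a driver visit them?

```python
import math

def nearest_neighbor_route(depot, stops):
    """Order delivery stops greedily: from the current location, always
    drive to the closest remaining stop. A simple baseline heuristic."""
    remaining = list(stops)
    route = []
    current = depot
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

# Hypothetical coordinates around Blacksburg/Christiansburg.
depot = (37.2296, -80.4139)
stops = [(37.20, -80.41), (37.15, -80.42), (37.23, -80.43)]
print(nearest_neighbor_route(depot, stops))
```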

VisAR

VisAR was a startup that aimed to create an alternate world of augmented reality objects. My role was to lead the team that created the backend, which handled the loading, storage, and placement of objects in augmented reality. One core challenge my team tried to solve was monocular depth estimation: the problem of trying to see in 3D with a single camera. To solve this, our team tried to implement SLAM, which takes advantage of a moving camera to recover depth. We also implemented a solution that placed objects on a QR code, though this was not our end goal. The team also created an S3 bucket on AWS to store files for augmented reality objects and used CloudFront to distribute our files more quickly.
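The QR-code approach is easy to sketch with OpenCV's built-in detector. This is a minimal illustration of the anchoring idea rather than our production code, and the test image name is hypothetical.

```python
import cv2

def find_qr_anchor(frame):
    """Locate a QR code in a camera frame and return its decoded payload
    plus the pixel coordinates of its center, which a renderer could use
    as the anchor point for a virtual object."""
    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(frame)
    if points is None:
        return None
    corners = points.reshape(-1, 2)   # four corner points of the code
    center = corners.mean(axis=0)     # anchor the object at the center
    return data, center

if __name__ == "__main__":
    frame = cv2.imread("qr_test.jpg")  # hypothetical test image
    if frame is not None:
        anchor = find_qr_anchor(frame)
        if anchor:
            payload, (cx, cy) = anchor
            print(f"Anchor object '{payload}' at pixel ({cx:.0f}, {cy:.0f})")
```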

Undergraduate Research Projects

I did two undergraduate research projects during my time as a student at Virginia Tech.

The first was a project in which my team created a polynomial-time algorithm for the subgraph isomorphism problem: given two graphs G and H, an algorithm must decide whether or not G is a subgraph of H. This problem is NP-complete, which means that no one has found a polynomial-time classical algorithm to solve it. Our algorithm achieved polynomial-time performance using quantum annealing, an optimization technique in which qubits are cooled down to reach a minimum energy state. The task for an algorithm that uses quantum annealing is to write a Q matrix that represents the problem you are trying to solve. Our algorithm constructs its Q matrix using a penalty function, ensuring that only a correct answer is represented by the minimum energy state produced by the quantum annealer. We sought to use our subgraph isomorphism solution to find vulnerabilities inside call graphs, which are graphs of which functions are called during the execution of a program. However, the call graphs produced even for small programs were too large to fit on a quantum computer. The journal article detailing our subgraph isomorphism algorithm is in review.

The other project tasked my team with designing a lighter-than-air aircraft that plays soccer. The rules of the game: several green balls are scattered about a field, and there are three goals on either side of the field that each team must defend. When a balloon drives one of the balls through one of the goals, its team is awarded a goal, and the team with the most goals at the end of the game wins. The game alternates between manual control phases, where teams control their balloons with a controller, and autonomous periods, where all balloons operate without human intervention. I worked primarily on the controls of the balloon. I implemented (or at least tried to implement) a PID controller that kept the balloon steady, and I wrote the code for our autonomous search pattern, which tries to bring balls into the view of a camera on the front of our balloon. I also worked on code that automatically closes a grabber on the front of our balloon when it encounters a ball (there are no handballs in this game, so we simply grab the ball rather than "kicking" it). We programmed our aircraft controls using ROS and C++.
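To make the Q matrix idea from the first project concrete, here is a toy Python sketch of the standard one-hot penalty, the same kind of penalty a subgraph isomorphism QUBO needs in order to force each vertex of G to map to exactly one vertex of H. This illustrates the technique, not our published construction.

```python
import itertools
import numpy as np

def one_hot_penalty(n):
    """Q matrix for the penalty (x_0 + ... + x_{n-1} - 1)^2 over binary x
    (constant term dropped). Its minimum energy states are exactly the
    assignments with a single x_i = 1."""
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] = -1.0                 # x_i^2 = x_i for binary variables
    for i, j in itertools.combinations(range(n), 2):
        Q[i, j] = 2.0                  # cross terms from squaring the sum
    return Q

def energy(Q, x):
    x = np.array(x, dtype=float)
    return float(x @ Q @ x)

Q = one_hot_penalty(3)
for x in itertools.product([0, 1], repeat=3):
    print(x, energy(Q, x))  # energy hits its minimum (-1) iff one bit is set
```

On the balloon side, the steadying controller was the textbook PID loop, written for us in ROS and C++. A minimal Python sketch of the same structure, with placeholder gains rather than our tuned values:

```python
class PID:
    """Textbook PID controller: output is a weighted sum of the error,
    its integral, and its derivative."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. holding altitude: thrust = pid.update(target_alt, current_alt, dt)
pid = PID(kp=1.0, ki=0.1, kd=0.05)
print(pid.update(setpoint=2.0, measurement=1.8, dt=0.1))
```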

Victor Tango Autodrive Perception Team

Victor Tango Autodrive is Virginia Tech's team in SAE's AutoDrive Challenge, where teams from 10 universities compete to build a self-driving car. I worked on the perception subteam, which creates the system that senses and interprets the car's surroundings. Within perception, I worked on our Maps subsystem and our Computer Vision subsystem.

The Maps subsystem uses a GPS and an HD map of a test track to look up the car's surroundings: the car's lane, its offset from the center of that lane, and the current speed limit. (The team is currently extending the system to also publish quantities like road curvature and tangent, which are calculated as I write this but not yet published.) First, the system uses SWRI's novatel GPS driver to find the car's position. The main node (written in ROS) is the localizer. The localizer's job is to look up the car's lane in the HD map database, which is a PostgreSQL database: it iterates through the lane table to find which lane the car is in, then looks up the speed limit and computes the offset from center.

The Computer Vision subsystem consists of many parts, and I didn't work on them all. I worked on the nodes that use neural networks to locate signs, traffic lights, and obstacles. Other nodes include the node that outputs images from our camera, an image crop node, and calibration nodes. Understanding these nodes is critical to understanding the neural network nodes: the camera, image crop, and calibration nodes work with our lidar system to create a bounding box around areas of interest within the camera image. We used YOLOv3 with Python for traffic light detection, which determines whether a traffic light is red, yellow, or green. We also used YOLO and Python for obstacle detection, though there we used OpenVINO (Intel's toolkit for distributing pre-made neural networks). The system I'm most proud of is the sign detector. We used a somewhat unconventional solution: rather than a neural network that actually detects signs, we simply read the text off signs to classify them. For example, in the US every stop sign says "STOP" on it, so we read that text and pass it on. The reason this works well is the computer vision system's interaction with our lidar. Our car is "lidar first": before any computer vision happens, the lidar crops the image down to only the regions where there are signs, so we don't accidentally read something that isn't a street sign.
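The localizer's lane lookup is straightforward to sketch. The table and column names below (a `lanes` table with `lane_id`, `speed_limit_mph`, and a PostGIS `boundary` geometry) are hypothetical stand-ins for our actual schema, but the iterate-and-test flow matches the description above.

```python
import psycopg2
from shapely import wkb
from shapely.geometry import Point

def lookup_lane(conn, x, y):
    """Iterate the HD-map lane table and return the first lane whose
    boundary polygon contains the car's position, along with its speed
    limit. Schema names are hypothetical."""
    car = Point(x, y)
    with conn.cursor() as cur:
        cur.execute("SELECT lane_id, speed_limit_mph, boundary FROM lanes")
        for lane_id, speed_limit, boundary_hex in cur:
            # PostGIS geometries come back as hex-encoded WKB strings.
            if wkb.loads(boundary_hex, hex=True).contains(car):
                return lane_id, speed_limit
    return None

# Usage, given a position from the GPS driver:
# conn = psycopg2.connect("dbname=hd_maps")
# print(lookup_lane(conn, 483201.5, 4113456.2))
```

The sign reader, meanwhile, boils down to running OCR over the lidar-cropped region. Here is a hedged sketch of that idea using pytesseract; the keyword table is illustrative, and the real node covered many more sign types.

```python
import cv2
import pytesseract

# Hypothetical keyword table; the production system handled more signs.
SIGN_KEYWORDS = {"STOP": "stop_sign", "YIELD": "yield_sign", "SPEED": "speed_limit"}

def classify_sign(frame, box):
    """Classify a sign by reading its text. `box` is the (x, y, w, h)
    region of interest that the lidar pipeline cropped out for us."""
    x, y, w, h = box
    crop = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)   # OCR prefers grayscale
    text = pytesseract.image_to_string(gray).upper()
    for keyword, label in SIGN_KEYWORDS.items():
        if keyword in text:
            return label
    return "unknown"
```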

Other projects

These are some smaller, stepping-stone projects that I've done.