My name is Benjamin Smith. I am an avid computer science professional based in the Toronto area. I am currently a Tech Lead Manager at Google Waterloo, where I lead a team of engineers in building tools and infrastructure to keep our products fast and responsive.
I have previously worked as a manager, software engineer, and researcher across the fields of artificial intelligence, visualization, medical image analysis, and distributed systems. I am also passionate about graphics and game design, regularly participate in the Toronto Game Jam (TOJam), and tinker on my own projects in my spare time.
This site contains a sampling of projects and publications I have been involved in. These are mostly outside of my professional activities. Feel free to connect if you would like to know more about any of these areas!
For our 2016 entry into the TOJam game marathon, we tried our hand at a VR game. We had access to an Oculus Rift and a Gear VR, and created a zombie-survival game. The objective was to make it through a maze-like environment without being caught by zombies, using a time-limited flashlight for visibility.
Due to the time constraints, we used available 3D characters + animations from Mixamo. This was my first time exploring Unity's character animation / blending system, as well as creating non-trivial behavior trees for the zombie AI.
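To give a sense of how behavior trees like the zombie AI's are structured, here is a minimal sketch in Python. The actual game used the RAIN AI plugin's tree editor inside Unity, so the class and function names below are my own illustration, not the real code:

```python
# Minimal behavior-tree sketch. A Selector tries children until one
# succeeds; a Sequence runs children until one fails.
SUCCESS, FAILURE = "success", "failure"

class Action:
    """Leaf node wrapping a zero-argument function that returns a status."""
    def __init__(self, fn):
        self.fn = fn

    def tick(self):
        return self.fn()

class Sequence:
    """Composite node: fails on the first failing child."""
    def __init__(self, *children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Composite node: succeeds on the first succeeding child."""
    def __init__(self, *children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

# Hypothetical zombie brain: chase the player if visible, otherwise wander.
state = {"sees_player": True}
ran = []

def player_visible():
    return SUCCESS if state["sees_player"] else FAILURE

def chase():
    ran.append("chase")
    return SUCCESS

def wander():
    ran.append("wander")
    return SUCCESS

zombie = Selector(
    Sequence(Action(player_visible), Action(chase)),
    Action(wander),
)
```

Each frame the tree is "ticked" from the root, so the zombie re-evaluates whether it can see the player before committing to chasing or wandering.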
The VR pieces of the project were relatively straightforward, and the main challenges were around playability and motion sickness in VR.
The project was a great success, both in terms of something neat to play, and getting exposure to a handful of new tools and technologies.
Technologies used: Unity, C#, Rain AI, Mecanim, Oculus SDK
The upper image is a still from a verlet physics simulation of a Flamingo marionette. This was a project I created as a warm-up exercise to learn the XNA framework, which can be used to create Xbox 360 games. It was also a refresher on C#.
The marionette simulation was a subset of the Marionette "Street Fighter" game I created in my undergrad many years before (lower image). I no longer had the original code, so I started from scratch.
Instead of hard coded "programmer art", I modeled the Flamingo using Blender, tagging the relevant rope attachment points to integrate with the simulation.
I also incorporated a toon shading effect. The project went well, and the marionette was quickly up and running on the Xbox 360.
I still think the elegance of the verlet integration in this type of physics model is very cool!
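For a taste of that elegance, here is a toy Python sketch of the core idea (not the original XNA code): each point stores its current and previous position, so velocity is implicit, and simple distance constraints hold the rope segments together.

```python
# Verlet integration: x' = 2x - x_prev + a * dt^2.
# Velocity never appears explicitly -- it lives in (x - x_prev).
def verlet_step(pos, prev, accel, dt):
    """Advance one point; returns (new_position, new_previous)."""
    new = [2 * p - q + a * dt * dt for p, q, a in zip(pos, prev, accel)]
    return new, pos

def constrain(p1, p2, rest_length):
    """One relaxation pass pulling two points toward their rest distance,
    which is how rope segments stay a fixed length apart."""
    delta = [b - a for a, b in zip(p1, p2)]
    dist = sum(d * d for d in delta) ** 0.5
    correction = (dist - rest_length) / dist / 2
    p1 = [a + d * correction for a, d in zip(p1, delta)]
    p2 = [b - d * correction for b, d in zip(p2, delta)]
    return p1, p2
```

Stepping every point and then running a few constraint passes per frame is essentially the whole simulation loop; collisions and the strings from the control bar fit naturally into the same constraint step.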
Technologies used in updated simulation: C#, XNA 4.0, Blender
Technologies used in original game: C++, OpenGL, SDL
In 2014, Epson held the "Develop the Future" contest: inviting participants to pitch AR demos and ideas for their Moverio AR device. I created a short demo to illustrate how AR could create an illusion of hidden space by rendering a cut-away on top of a QR code target.
The cut-away effect used a stencil buffer to render a "hole" and then a stencil test to only render the interior when the stencil buffer was non-zero. This created the illusion of a hole in the wall which could be looked into. The entire graphic was rendered over a QR code on the wall, recognized with the Vuforia SDK.
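The two-pass logic is easy to show with a toy software "framebuffer" (the real version used the GPU stencil buffer through Unity; this Python sketch just demonstrates the idea):

```python
# Software sketch of a stencil-buffer cut-away on a 1D "framebuffer".
def render_cutaway(width, hole_range, wall_color, interior_color):
    """Pass 1 marks the hole region in a stencil mask; pass 2 draws the
    interior only where the stencil is non-zero, leaving wall elsewhere."""
    stencil = [0] * width
    for x in range(*hole_range):       # pass 1: write the "hole" shape
        stencil[x] = 1
    frame = [wall_color] * width       # wall rendered everywhere
    for x in range(width):             # pass 2: stencil-tested interior
        if stencil[x] != 0:
            frame[x] = interior_color
    return frame
```

On the device, the "wall" is the real wall seen through the glasses, so only the stencil-masked interior geometry is actually drawn, completing the illusion.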
Additionally, the Moverio did not ship with proper stereo rendering at the time. I was able to create side-by-side stereo in Unity with a two-camera setup. It was necessary to work through an interesting math/trig derivation to determine the correct asymmetric view frustum for stereo (as opposed to toe-in cameras aimed to converge on the scene, which causes visual discomfort).
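The gist of that derivation can be sketched in a few lines. This is my reconstruction of the standard off-axis ("parallel") stereo math, with parameter names of my own choosing rather than anything from the Moverio SDK:

```python
import math

def offaxis_frustum(eye, separation, fov_y_deg, aspect, near, convergence):
    """Near-plane frustum bounds (left, right, bottom, top) for one eye.

    eye: -1 for the left eye, +1 for the right. Each camera is offset
    sideways by half the eye separation but kept parallel; the frustum
    is skewed so both views line up at the `convergence` depth, avoiding
    the vertical parallax that toed-in cameras introduce.
    """
    top = near * math.tan(math.radians(fov_y_deg) / 2)
    half_w = top * aspect                       # half-width at near plane
    shift = (separation / 2) * near / convergence
    left = -half_w - eye * shift
    right = half_w - eye * shift
    return left, right, -top, top
```

The key observation is that the frustum skew at the near plane is just the eye offset scaled by `near / convergence` (similar triangles), so each eye's frustum shifts toward the centerline and the two images coincide exactly at the convergence plane.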
Technologies used: Unity, Blender, Vuforia AR SDK, Moverio BT-200
For our 2017 TOJAM entry, we wanted a simple game mechanic, and decided on a rails-style game that requires the player to define a path through an environment and then proceed along it without being spotted by NPCs. The only control the player is allowed is to stop / resume the path. We built an Office Space style theme around the mechanic in which the player attempts to escape from work early.
I had a lot of fun with this project. I went a lot deeper into RainAI to build the behavior trees for the NPC AIs. We also required an "interaction engine" to manage the behavior when two characters encountered each other, which was something I had not built before.
We used freely available animations from Mixamo for the characters' actions, but I created all the 3D character and object models using Blender. I had previously done one or two simple models, but creating 3D assets at this scale was a new experience, and I became a lot more comfortable creating objects that were visually appealing and could be animated without artifacts.
Going forward, I'd like to polish this prototype, and see how much fun we could make the mechanic.
Technologies used: Unity, C#, Mecanim, RainAI, Blender
This is an ongoing project to remake an old 80s ASCII game: Pyro II. One of the interesting technical challenges of this game is that the levels can be completely destroyed in-game. This was simple in ASCII, where the graphics are 2D blocks, but it is more difficult in 3D!
The lower images show the isometric remake I first attempted, which required a lot of care to piece the scene tiles together properly! At the time, real-time particle systems were prohibitive, so I pre-rendered as much of the in-game animation as possible and played it back as sprites.
More recently, I have been working on a true 3D version using Unity, shown in the upper images.
In both versions, I recreated the basic game mechanics: the building walls burn away, simulated gas puddles flow over the floor, explosions are created from ignited gas cans, and flames chase after you on a burning rope.
Technologies used: Unity, C#, C++, OpenGL ES, cocos2d-x, Blender
This was an interesting foray into 'edu-tainment'. I was invited to a conference to talk about some of the new algorithms and approaches we were investigating for picking at Amazon. To help "hook" people, and illustrate the challenges of picking, I developed a tablet game where the player performs an optimization task that is normally automated. You can still play it :)
This game needed to be done on a short time frame, on my own time, so I purposefully kept the graphics simple. The biggest challenge turned out to be playability: the real world situation has too many variables to easily control. I iterated several times on the core idea, continuously simplifying until the game was playable and fun, but still illustrated the core concepts.
Technologies used: Javascript, AWS, Impact.js, Scoroid
In the lead-up to the 2017 TOJam, I was trying to identify a unique visual style we could use that would also reduce the time and effort required for the team to build art (as none of us were great artists).
As a result, there was a period of 2 weeks where I ran through a number of fast prototypes for different ideas. I have included them here because I found them quite interesting. In the end, we went down the path of simple stylized 3D models which were slightly more detailed 'stick people'.
Stop motion: The first experiment was to try and create 2D sprite animations from stop-motion animation captured on a green screen. I built a photo box / green screen setup to obtain uniform lighting, and wrote GIMP scripts to process the images and remove the background. This worked, but I ran into difficulties with different sized objects and overlaying them at the right scene depth. I tried a few things to work around this, including writing a Python utility to acquire depth information from a Kinect camera, but ultimately decided this style wasn't viable for a short-term project.
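The background-removal step boils down to a simple chroma key. The real scripts were GIMP Script-Fu; this pure-Python toy version (my own illustration, with a made-up threshold) just shows the per-pixel test:

```python
# Toy green-screen key: a pixel becomes transparent when its green
# channel clearly dominates both red and blue.
def chroma_key(pixels, threshold=60):
    """pixels: rows of (r, g, b) tuples; returns rows of (r, g, b, a),
    with alpha 0 where the pixel is judged to be green screen."""
    out = []
    for row in pixels:
        out.append([
            (r, g, b, 0 if g - max(r, b) > threshold else 255)
            for (r, g, b) in row
        ])
    return out
```

In practice the threshold needs tuning per lighting setup, which is exactly why the uniform lighting from the photo box mattered.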
Card Rendering: The second experiment was to create a sort of 'card' character by rendering a character model onto perpendicular planes and using a 3D shader to add a thick 'paper' border. The visual effect was neat, but did not work well when the character was animated, as the perpendicular views appeared disconnected at the limbs. I ruled it out for that reason, but this would be an interesting one to revisit, possibly with multiple planes in each direction to improve the perception of synchronized movement.
2D paper puppets: The final experiment was to create 2D jointed puppets using static 2D images. At first, I did this by creating a 'pastiche' of different magazine images, which created a very cool style. Unfortunately, it was also very time-consuming to create. An interesting direction: with a large enough source image library, the pastiche process could be automated. As a variant, I hand drew some static pieces to create the puppets, but did not end up going in this direction.
Technologies Used: Unity, Python, Kinect for Windows, GIMP, Blender
In May of 2011, I participated in TOJam with a group of CS-inclined friends. We created "Feed Willy": guide a whale through a sunken city, catching fish and keeping the whale's oxygen level up by swimming through geysers of air.
The game was written in C#, with the XNA 4.0 framework. I wrote a quick keyframe animation engine, using blended quaternions to animate the whale. I rigged an existing whale in Blender and created the necessary swimming motion. I also wrote swarming logic for the fish and built a particle system.
Our decision to build a 3D engine from scratch made this a huge endeavor, but it was a ton of fun. At the end of the weekend, we had a nice little game to show off.
Technologies used: C#, XNA 4.0, Blender
In 2012, we participated in our second TOJam weekend game competition. We used the Unity engine in order to move fast. We came up with the idea of a platform game that took place on a rotating set of concentric circles. The player could move from outer to inner circle, dodging obstacles. At the end of TOJam we had a functional multi-player game with cannons, volcanos and ramps to jump off. It was a blast!
I created character models and animation using Blender for the 3D playable character. I also designed and implemented level generation tools. The levels were based on concentric circles, so it was possible to generate the geometry mathematically, including 'blended' circles that served as ramps between inner and outer rings.
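The ring math is simple enough to sketch. This Python version is a hypothetical reconstruction of the idea, not the original C# generator: a plain ring samples a circle of fixed radius, and a "blended" ring sweeps its radius from the inner value to the outer one over a single lap, forming the ramp.

```python
import math

def ring(radius, segments):
    """Vertices of one circular platform ring, sampled counter-clockwise."""
    return [(radius * math.cos(2 * math.pi * i / segments),
             radius * math.sin(2 * math.pi * i / segments))
            for i in range(segments)]

def ramp_ring(r_inner, r_outer, segments):
    """A 'blended' ring whose radius interpolates from r_inner to
    r_outer over one full lap, joining two concentric rings."""
    points = []
    for i in range(segments):
        t = i / segments                       # fraction of the lap
        r = r_inner + (r_outer - r_inner) * t  # linearly blended radius
        angle = 2 * math.pi * t
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points
```

Extruding these point loops vertically gives the playable ring surfaces, and obstacles can be placed by angle and ring index instead of raw coordinates.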
Technologies used: Unity, C#, Blender
One of the assignments in the University of Waterloo's computer graphics course is to create your own raytracer.
There is a base set of features to implement, and at least one non-trivial extension. I had a lot of fun with this assignment and ended up implementing several: reflections, shadows, and anti-aliasing.
The final ray tracer is showcased with a custom demo scene, shown on the left. I particularly liked the "hall-of-mirrors" effect created by the reflective sphere and the frozen pond. I later served as a TA in a similar graphics class at SFU, helping others develop their own raytracer projects.
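At the core of any such raytracer is the ray-sphere intersection test, which also powers shadows (cast a ray toward the light and check for blockers) and reflections (recurse along the mirrored ray). A generic Python sketch, not the course code:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Smallest positive t where origin + t*direction hits the sphere,
    or None on a miss. `direction` is assumed to be normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    # Quadratic at^2 + bt + c with a = 1 for a unit direction.
    b = 2 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2       # nearer of the two roots
    return t if t > 1e-6 else None       # epsilon avoids self-intersection
```

The small epsilon on the returned t is what prevents "shadow acne": without it, a shadow ray leaving a surface immediately re-hits that same surface.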