I am not looking for a job: I am very happy working at Google, and I can’t imagine a better place to be as a software engineer. The opportunity to work with really amazing software developers, on systems of previously unimaginable scale, is a programmer’s dream. It has also been a humbling experience: I’m a good developer, but I didn’t know, until I started working at Google in 2010, what world-class software development looks like. I’ve learned a lot, and I’m still learning.
But life takes funny turns sometimes, so I try to keep my résumé up to date. I even spell résumé correctly most of the time, though I also spell it as ‘resume’ or even ‘CV’ so search engines can have an easier time of it.
I can be reached by email at .
The online version of this résumé can be found on my website at http://www.brayden.org/resume.html
Why You Might Want To Hire Me
- My projects are almost always successful: they come in more or less on time (with a few painful exceptions), and more than meet the customer’s expectations.
- I am not a one-trick pony: I use the tools, languages, and techniques suitable for the problem at hand. Your experience may differ from mine, but when I’ve interviewed people whose only trick is using Java and some set of Java frameworks, it’s an almost sure bet that they don’t have a clue how to solve actual problems, and will be stumped by anything that doesn’t fit in their limited world-view. I do not suffer from that syndrome.
- I have focus: when I embark on a project I see it through, and do a pretty good job of fending off distractions, unnecessary features, premature optimizations, and the rest of the baggage that causes project delays.
- Despite what you might see here (this is a résumé, after all), I keep my ego in check. If a coworker has a better way of doing something, I am more than happy to abandon my way. Of course, I’m also not shy about letting coworkers know that they are doing it wrong, when they are.
- I provide strong technical and project leadership and I am able to motivate and lead other engineers to provide their best effort.
I am presently employed at Google as a senior software engineer on a site reliability engineering team responsible for the source control, build, and test systems. My main focus has been on the development of tools to test and measure those systems to ensure that they are reliable, efficient, and fast.
Previously I worked at Amazon.com for 3 years as a senior software development engineer on the infrastructure automation team. My responsibility was to develop tools to simplify and automate the management and configuration of network devices in Amazon’s datacenters. I also worked for a time on the third-party selling platform, making contributions to the new-user registration pipeline and the fee calculation service.
Before that I was at Guidant Corporation for 4 years as a senior software developer, developing systems and applications for manufacturing automation.
Prior to that, I was chief engineer and senior engineer for two small companies, Systematic Designs and Object Engineering. My contribution was key to the survival and success of both companies. In both cases I proposed, designed, and led the implementation teams for the companies’ core software. During that time, my specialty was the design and construction of high performance distributed applications using a variety of software technologies including COM, CORBA, SOAP and other XML-based protocols, and various message-bus software.
- 2010-present: Google, Inc., Seattle, Washington
- 2007-2010: Amazon.com, Seattle, Washington
- 2003-2007: Guidant / Boston Scientific, St. Paul, Minnesota
- 1991-94, 1995-2003: Systematic Designs, Inc., Vancouver, Washington
- 1994-95: Object Engineering, Inc., Vancouver, Washington
- previous: self-employed and Tektronix, Portland, Oregon
Languages, Platforms, and Tools
If you are a human you can skip this obligatory list of keywords. If you are a search tool, chow down!
- Languages I’ve used in the past year
- Languages I’m still comfortable using, despite not having used them this year
- Languages I’ve used before but hope never to use or hear of again
- Language-like thingies I’m pretty good at, and which always show up on these lists
- HTML, CSS
- Languages I intend to test drive in the next year
- OS’s that I like to develop on
- Ubuntu Linux, ChromeOS (because it’s really Linux)
- OS’s that I rather like using but don’t really want to develop on
- OS’s that were really cool in their time but are now a niche, at best
- Development environments that I use
- Emacs + command line
- Development environments that I use but don’t like very much
- Development environments I’ve used in the past and really liked but which don’t fit my current needs anymore
- Visual Studio
- Database systems I’ve used and enjoyed using
- PostgreSQL, SQLite
- Database systems I’ve used because I had to for some reason
- Oracle, MySQL, SQL Server, Microsoft Access
- Database-like thingies I’m using now
- Source control tools
- Perforce, Visual SourceSafe (remember what I said about using Visual Studio?), Subversion, Git, RCS (yes, I’ve been around that long)
- Frameworks that I find overblown and dangerous, and please don’t ask me to use them, or at least tell me during the interview that I must, so I can just say no and we will both be happier
- Spring, Hibernate
- Frameworks I’ve used that are works of genius and things of beauty
- jQuery, Sinatra
Good/Not Good: Evaluating Software Releases
The problem: you have a system that runs on tens of thousands of machines, with unbelievable data rates in and out, and with complex interactions among multiple serving and caching systems. Sometimes a new release goes out, and you don’t discover until it’s been fully deployed that error rates or latency have increased, or that throughput has decreased: the release has to be rolled back. So: how to prevent this?
My solution consisted of the following:
- Created a load-testing framework to apply load to a scaled-down version of the system (hundreds of machines).
- Ran load tests throughout the day, alternating between the current production software and the new release candidate.
- Collected all the metrics possible from the system while the load tests were running: thousands of variables in this case, collected every minute.
- Applied a set of statistical tests to the collected data to identify variables that differ between the new release and the production software.
- Exposed the statistical analysis via a dashboard that can be reviewed prior to release, with all differences highlighted.
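The statistical comparison at the heart of those steps can be sketched in a few lines. This is not the actual Google-internal tooling, just an illustration of the idea: for each metric, compare the samples from the two builds with a two-sample test (Welch’s t statistic here) and flag the ones that differ beyond a threshold.

```python
# Illustrative sketch: flag metrics whose samples differ significantly
# between the production build and the release candidate.
# All names and thresholds here are hypothetical.
import math

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def flag_regressions(prod: dict[str, list[float]],
                     candidate: dict[str, list[float]],
                     threshold: float = 3.0) -> list[str]:
    """Return names of metrics whose samples differ beyond the threshold."""
    return [name for name in prod
            if name in candidate
            and abs(welch_t(prod[name], candidate[name])) > threshold]

prod = {"latency_ms": [102, 99, 101, 100, 98, 103],
        "qps": [5000, 5020, 4990, 5010, 5005, 4995]}
cand = {"latency_ms": [131, 128, 133, 130, 129, 132],   # clear regression
        "qps": [5001, 5018, 4992, 5012, 5003, 4997]}
print(flag_regressions(prod, cand))   # -> ['latency_ms']
```

In practice the samples were much larger and the tests more sophisticated, but the shape of the problem is the same: thousands of per-metric comparisons, reduced to a short list of suspects for a human to review.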
The results have been good. Numerous releases have been stopped before going to production because of sometimes subtle performance regressions. Release decisions are now in the realm of science rather than fortune telling.
Distributed Control of Manufacturing Equipment
The problem: you need to write software to control and collect data from many (> 40) types of manufacturing equipment. They have similar communication interfaces, but widely varying control requirements and capabilities. You would like to complete the project successfully, and make some money.
My approach to the problem was:
- Recognize that this is an instance of cooperating sequential processes.
- Design and implement a language that allows the direct expression and execution of Harel state machines. Not some sort of code generator, but a language in which the state machine structure and the code tied to states and transitions are woven together, with a mostly familiar syntax.
- Integrate that language with standard debuggers.
- Write good documentation on the language, and provide real-life examples.
- Mentor the rest of the team that will be developing the actual equipment interfaces using the new language.
- Code, test, debug, deploy.
- Bask in the glory of a successful project. Bask! I say, bask!
This language continued to be used for many years on a large variety of projects.
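The original language is proprietary, but its core semantics can be sketched in Python: states carry entry and exit actions, and transitions carry their own code, with exit, transition, and entry actions firing in a fixed order. The class and event names below are illustrative, not the original syntax.

```python
# A minimal sketch of executable-state-machine semantics (hypothetical names,
# not the original language): states with entry/exit actions, transitions
# with their own actions, fired in exit -> transition -> entry order.

class State:
    def __init__(self, name, on_entry=None, on_exit=None):
        self.name = name
        self.on_entry = on_entry or (lambda: None)
        self.on_exit = on_exit or (lambda: None)
        self.transitions = {}          # event -> (target state, action)

class Machine:
    def __init__(self, initial: State):
        self.state = initial
        self.state.on_entry()

    def fire(self, event: str):
        """Run exit action, then transition action, then entry action."""
        target, action = self.state.transitions[event]
        self.state.on_exit()
        action()
        self.state = target
        target.on_entry()

# A two-state equipment controller: idle <-> running.
log = []
idle = State("idle", on_entry=lambda: log.append("enter idle"))
running = State("running",
                on_entry=lambda: log.append("start motor"),
                on_exit=lambda: log.append("stop motor"))
idle.transitions["start"] = (running, lambda: log.append("begin cycle"))
running.transitions["done"] = (idle, lambda: log.append("record results"))

m = Machine(idle)
m.fire("start")
m.fire("done")
print(m.state.name)   # -> idle
```

The point of making this a language rather than a library was that the state structure stayed visible in the source, instead of being buried in tables of callbacks like the above.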
Automate a Process, Win an Award (sort of)
The problem: you are making batteries for implantable medical devices (pacemakers and defibrillators). Tolerances are tight, traceability requirements are rigorous, and the cost of mistakes is patient death. You have to measure the before and after weight of a device at widely separated process steps, and calculate the delta weight to be sure the processes were done correctly.
My solution:
- Write software that attached to the measurement equipment, to the corporate traceability system, and to a measurement storage system.
- Provide interfaces for barcode and RFID input from the measured devices.
- Figure out from the traceability system what process step the device is at.
- Collect the measurement (the weight), store it to the measurement storage system, and report to the traceability system, computing the delta beforehand if the device is at the second or later measurement step.
- Provide a spiffy UI for the equipment operator.
- Create a test suite with every edge case imaginable. Make the tests repeatable so they can be run prior to any new release.
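The delta-weight check itself is simple once the plumbing is in place; a sketch of that step, with hypothetical function and parameter names (the actual system spoke to the traceability database rather than taking values as arguments):

```python
# Illustrative sketch of the delta-weight step: given the stored prior weight
# and the new measurement, compute the delta and check it against the process
# tolerance. Names and numbers here are hypothetical.

def check_delta_weight(prior_g: float, current_g: float,
                       expected_delta_g: float, tolerance_g: float):
    """Return (delta, ok): the weight change and whether it is in tolerance."""
    delta = current_g - prior_g
    ok = abs(delta - expected_delta_g) <= tolerance_g
    return round(delta, 3), ok

# Device gained 0.412 g of material; process spec is 0.400 g +/- 0.015 g.
print(check_delta_weight(12.345, 12.757, expected_delta_g=0.400,
                         tolerance_g=0.015))   # -> (0.412, True)
```

The hard part wasn’t the arithmetic; it was the edge cases around the traceability system (devices reworked, re-measured, or measured out of order), which is why the test suite mattered.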
This system won the company President’s Award, two months after I left the company. So I got some reflected glory, but missed out on the thousand bucks.
As an aside: the scales used in that process are amazing. If you take a sheet of paper, and cut off a tiny corner of it, and place that tiny scrap of paper on the scale, it will register, with precision to 0.001 grams and accuracy to 0.005 grams.
A Tedious Timeline of Projects, Big and Small
This section is also known as “bunches of stuff I’ve done so you don’t think I’ve been sitting on my hands for two decades”, and “fodder for that opening interview question, where the interviewer feigns interest for no more than 3 minutes in something you’ve worked on.”
Warning: acronyms that you’ve never heard of abound in what follows.
2014: wrote a tool (in Go) to do near real-time latency analysis for a large pipelined service, and made changes to that service to improve its throughput and reduce its resource footprint.
2013: wrote a much more sophisticated metrics analysis engine capable of evaluating thousands of variables across many samples, and providing clear signals of variation between samples.
2012: wrote a load testing and metrics collection framework for a very large distributed build system, and designed and implemented methods of detecting even slight performance or error degradations in new releases of that system.
2011: wrote a load testing framework for a distributed source control system, and improved a performance measurement dashboard for that same system by adding graphs and other diagnostics for probable performance regressions.
2010: joined Google. Wrote lots of stuff. Got readability in C++ and Python (kind of a big deal).
2009: created a domain specific language and a service for auditing the fees charged to 3rd party sellers, to detect problems caused by incorrect fee configurations. It happens.
2008: key member of a team that built a test-lab for the infrastructure automation load-balancer tools stack. This included construction of a test-dashboard, and modifications to the tools stack to allow test deployments to run in isolation from the production stack.
2008: designed and implemented an object model for representing load balancer configuration at a high level, together with methods to compute differences between configurations and to update a load balancer with those differences.
2007: implemented a domain-specific language for defining a database schema and generating Oracle DDL, Java api, and Hibernate configuration from a single source.
2007: developed a prototype web-based application using Ruby on Rails to perform automatic collection of manufacturing traceability data.
2007: developed and implemented a system for parts tracking and temperature tracking within an automated burn-in oven. This and the next project (below) were later the winners of a corporate “President’s Award”.
2006: designed and implemented software to automate data collection from manually controlled measurement instruments.
2006: designed and led the implementation of a system for automatically collecting equipment process data, uploading that data into a secure database system, and exporting configurable subsets of that data to a statistical process control system.
2005: designed and implemented a system for integrating RFID into an existing manufacturing process.
2004: designed and implemented a system for performing automatic collection of manufacturing traceability data, including integration with barcode scanners, conveyors, and process equipment.
2003: designed and led the implementation of a high-performance, cross-platform, store-and-forward system for SOAP/XML messaging.
2003: designed and consulted on the implementation of a cross-platform event distribution ‘message bus’ component.
2002: member of a 2-person team that ported large Iona/Orbix-based applications to the ACE/TAO CORBA ORB and achieved interoperability between Orbix and TAO. If you’ve heard of Orbix or TAO you must be a specialist in the history of out-of-date technology and poor ideas that should never have seen the light of day. Just sayin’.
2000-2002: project lead for an assembly-line automation project for Guidant corporation. This was a high performance distributed control system using XML messaging. Project budget was approximately $800K.
2000: designed and implemented a Windows-based interactive tool for the construction of executable state machines, used in the Guidant automation project and others. The tool provided a UI for defining the state machine, and an editor to attach code to entry or exit from states, or to state transition arcs. It also generated a state diagram, to facilitate ease of understanding and documentation.
1999-2000: project lead or technical lead for numerous semiconductor equipment SECS/GEM implementations. Total project budgets approximately $500K. SECS: Semiconductor Equipment Communication Standard. GEM: Generic Equipment Model.
1996-1998: proposed, designed, and led the implementation of the SdiStation product line. These products brought SDi (Systematic Designs, Inc.) substantial license fees and enabled much of the SDi project work from 1997 through 2001. The rights to the product were sold to a competitor for approximately $1,000,000.
1995: technical lead for porting and major enhancements to factory automation software at the LSi Tsukuba (Japan) semiconductor factory. Project budget approximately $300K.
1995: major contributor to the implementation of equipment controllers for the Hyundai I’chon (Korea) semiconductor factory. Project budget approximately $400K.
1994: proposed, designed, and implemented a programming language (SPL) for the direct expression of executable Harel statecharts. The language was still in use at Object Engineering as of 2004, and enabled the company to survive and thrive in a very difficult equipment-control market. I also co-designed and implemented a language for defining SECS (Semiconductor Equipment Communication Standard) messages for use in the OEI (Object Engineering, Inc.) SECS driver.
1994: designed and implemented the ‘business rules’ definition language and API, used in the SDi Material Control System (MCS) product.
1991-1993: designed and implemented the real-time look-ahead material dispatching system for the full factory automation system at LSi Tsukuba (Japan). Project budget approximately $1.5M.
1991: designed and implemented much of the core code libraries for the LSi Tsukuba factory automation project. These libraries were later used on many other projects at SDi (Systematic Designs).
1991: designed and implemented a sophisticated event-driven scripting language for writing equipment emulators. This language was key to successful off-line testing of the LSi Tsukuba automation system.
University of Arizona, Tucson, AZ: Master of Science, Mathematics
University of Arizona, Tucson, AZ: Bachelor of Science, Mathematics. Graduated Summa Cum Laude. Member: Phi Beta Kappa.