
Hello, I'm Rick.

Understanding code — at scale, across languages, and across teams — is one of the hardest problems in software. It's a challenge for humans and AI alike, and I've spent my career building tools that help bridge that gap. I've worked for over a decade in developer tooling, including ten years at GitHub building large-scale code intelligence platforms. I love designing APIs, CLIs, and user-facing tools, and I care deeply about fast feedback loops, profiling, performance, and the craft of making code more understandable.

While at Nuanced, I built Nuanced LSP, an easy-to-use LSP server multiplexer built on a container-based architecture, along with a TypeScript SDK and a CLI that make it easy to adopt in development workflows for both humans and agents. I also built Eval-Agent, an evaluation harness that analyzes multi-turn agent traces, with an emphasis on tool calling and token usage, to systematically measure an agent's adoption of code intelligence tooling and its impact on task outcomes: task duration, cost, and token consumption.

While at GitHub, I contributed heavily to the Tree-sitter ecosystem of language parsers and developed Semantic, an experimental programming language analysis framework. I built the distributed system powering GitHub code navigation, helped implement Stack Graphs, a novel approach to name binding resolution that powered GitHub's precise code navigation, and added code navigation to the pull request view. I also contributed to the initial version of GitHub's AI assistant chat service, wrote the core prompt-building library for managing dynamic prompt contexts, and was a core member of GitHub's code-search team, helping build the system named Blackbird.

My initial passion as an undergraduate student was human languages (Japanese, Mandarin Chinese, and Korean). I translated that love of languages into a love of programming languages, and the majority of my career as a software engineer has been focused on building tools and services that parse and analyze source code across a wide variety of programming languages. More recently, I've become interested in the problems of AI code generation within large, complex codebases, and in how program analysis techniques can supply better code intelligence context, yielding more reliable and valuable generated code while simultaneously reducing token usage.

I love to program and enjoy programming in a variety of languages. I've worked at a high level across systems languages (Go and Rust), dynamic languages (Ruby, Elixir/Erlang, and Python), and pure functional languages (Haskell). I have a deep appreciation for the lambda calculus, combinator theory, type theory, and functional programming. I also love the challenges of large-scale distributed systems: designing and implementing concurrent solutions, profile-guided optimization, and operating services at extreme scale (10k+ requests per second).

Outside of work, I'm nearing the completion of an MSc in the Software Engineering Programme at Oxford University, which I expect to finish in May 2026. When I'm not studying, I enjoy the planning and precision of woodworking, the physical challenge of surfing, and attempting to make a perfect Neapolitan pizza. Thanks for visiting.