
Code as Craft


Etsy’s Journey to TypeScript


Over the past few years, Etsy’s Web Platform team has spent a lot of time bringing our frontend code up to date. It was only a year and a half ago that we modernized our JavaScript build system to enable advanced features, things like arrow functions and classes, that have been added to the language since 2015. And while this upgrade meant that we had futureproofed our codebase and could write more idiomatic and scalable JavaScript, we knew that we still had room for improvement.

Etsy has been around for over sixteen years. Naturally, our codebase has become quite large; our monorepo has over seventeen thousand JavaScript files in it, spanning many iterations of the site. It can be hard for a developer working in our codebase to know what parts are still considered best practice, and which parts follow legacy patterns or are considered technical debt. The JavaScript language itself complicates this sort of problem even further — in spite of the new syntax features added to the language over the past few years, JavaScript is very flexible and comes with few enforceable limitations on how it is used. This makes it notoriously challenging to write JavaScript without first researching the implementation details of any dependencies you use. While documentation can help alleviate this problem somewhat, it can only do so much to prevent a JavaScript library from being used improperly, which can ultimately lead to unreliable code.

All of these problems (and many more!) were ones that we felt TypeScript might be able to solve for us. TypeScript bills itself as a “superset of JavaScript.” In other words, TypeScript is everything in JavaScript with the optional addition of types. Types, in programming, are basically ways to declare expectations about the data that moves through code: what kinds of input can be used by a function, what sorts of values a variable can hold. (If you’re not familiar with the concept of types, TypeScript’s handbook has a fantastic introduction.) TypeScript is designed to be easily adopted incrementally in existing JavaScript projects, particularly in large codebases where shifting to a new language all at once can be an impossibility. It is exceptionally good at inferring types from the code you’ve already written, and its type syntax is nuanced enough to properly describe all of the quirks that are common in JavaScript. Plus, it’s developed by Microsoft, it’s already in use at companies like Slack and Airbnb, and it is by far the most used and loved flavor of JavaScript according to last year’s “State of JS” survey. If we were going to use types to bring some amount of order to our codebase, TypeScript seemed like a really solid bet.
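To make that concrete, here is a tiny, generic example of what type annotations buy you (this snippet is invented for illustration and isn’t Etsy code):

```typescript
// The annotations declare what the function expects and what it returns.
function formatPrice(amount: number, currency: string): string {
    return `${amount.toFixed(2)} ${currency}`;
}

formatPrice(12.5, "USD"); // OK: evaluates to "12.50 USD"
// formatPrice("12.50", "USD"); // Type error: a string is not a number.
```

The second call never makes it past the compiler, which is exactly the kind of guardrail we were after.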

This is why, hot on the heels of a migration to ES6, we started investigating a path to adopting TypeScript. This post is all about how we designed our approach, some of the fun technical challenges that resulted, and what it took to educate an Etsy-sized company in a new programming language.

Adopting TypeScript, at a high level

I don’t want to spend a lot of time selling TypeScript to you because there are plenty of other articles and talks that do a really good job of exactly that. Instead, I want to talk about the effort it took to roll out TypeScript support at Etsy, which involved technical work well beyond turning JavaScript into TypeScript. It also included a great deal of planning, educating, and coordinating. In hindsight, all of these ingredients seem obvious, but getting the details right surfaced a bunch of learning experiences worth sharing. To start, let’s talk about what we wanted our adoption to look like.

Strategies for adoption

TypeScript can be more or less “strict” about checking the types in your codebase. To quote the TypeScript handbook, a stricter TypeScript configuration “results in stronger guarantees of program correctness.” By design, you can adopt TypeScript’s syntax and its strictness as incrementally as you’d like. This flexibility makes it possible to add TypeScript to all sorts of codebases, but it also makes “migrating a file to TypeScript” a bit of a loosely-defined target. Plenty of JavaScript files can be turned into valid TypeScript just by changing their extension from .js to .ts, while many others need to be annotated with types before TypeScript fully understands them. And even if TypeScript understands a file just fine, that file might benefit from even more specific types, which could improve its usefulness to other developers.

There are countless articles from companies of all sizes on their approaches to migrating to TypeScript, and all of them make compelling arguments for different migration strategies. For example, Airbnb automated as much of its migration as possible. Other companies enabled less-strict TypeScript across their projects, adding types to code over time.

Deciding the right approach for Etsy meant answering a few questions about our migration:

  • How strict do we want our flavor of TypeScript to be?
  • How much of our codebase do we want to migrate?
  • How specific do we want the types we write to be?

We decided that strictness was a priority; it is a lot of effort to adopt a new language, and if we’re using TypeScript, we may as well take full advantage of its type system (plus, TypeScript’s checker performs better with stricter types). We also knew that Etsy’s codebase is quite large; migrating every single file was probably not a good use of our time, but ensuring we had types for new and frequently updated parts of our site was important. And of course, we wanted our types to be as helpful and as easy to use as possible.

What we went with

Our adoption strategy looked like this:

  1. Make TypeScript as strict as reasonably possible, and migrate the codebase file-by-file.
  2. Add really good types and really good supporting documentation to all of the utilities, components, and tools that product developers use regularly.
  3. Spend time teaching engineers about TypeScript, and enable TypeScript syntax team by team.

Let’s look at each of these points a little more closely.

Gradually migrate to strict TypeScript

Strict TypeScript can prevent a lot of very common errors, so we figured being as strict as possible made the most sense. The downside of this decision is that most of our existing JavaScript would need type annotations, and it meant we would have to migrate our codebase file-by-file: if we had tried to convert everything all at once under strict TypeScript, we would have ended up with a lengthy backlog of type errors to work through. As I mentioned before, our monorepo has over seventeen thousand JavaScript files in it, many of which don’t change very often. We chose to focus our efforts on typing actively-developed areas of the site, using file extensions to clearly delineate which files had reliable types (.ts) and which didn’t (.js).

Migrating all at once could make improving existing types logistically difficult, especially in a monorepo. If you import a TypeScript file with an existing suppressed type error in it, should you fix the error? Does it mean that your file’s types need to be different to accommodate potential problems from this dependency? Who owns that dependency, and is it safe to edit? As our team has learned, every bit of ambiguity that can be removed enables an engineer to make an improvement on their own. With an incremental migration, any file ending in .ts or .tsx can be trusted to have reliable types.
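In tsconfig.json terms, that strategy boils down to a handful of compiler flags. Here is a hedged sketch (our actual configuration has many more options):

```json
{
    "compilerOptions": {
        "strict": true,
        "allowJs": true,
        "checkJs": false
    }
}
```

`strict` turns on checks like `noImplicitAny` and `strictNullChecks`, while `allowJs` and `checkJs` let unmigrated .js files coexist with, and be imported by, fully-checked .ts files.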

Make sure utilities and tools have good TypeScript support

Before our engineers started writing TypeScript, we wanted all of our tooling to support TypeScript and all of our core libraries to have usable, well-defined types. Using an untyped dependency in a TypeScript file can make code hard to work with and can introduce type errors; while TypeScript will try its best to infer the types in a non-TypeScript file, it defaults to “any” if it can’t. Put another way, if an engineer is taking the time to write TypeScript, they should be able to trust that the language is catching their type errors as they write code. Plus, forcing engineers to write types for common utilities while they’re trying to learn a new language and keep up with their team’s roadmap is a good way to get people to resent TypeScript. This was not a trivial amount of work, but it paid off massively. I’ll get into the details about this in the “Technical Details” section below.

Educate and onboard engineers team by team

We spent quite a bit of time on education around TypeScript, and that is the single best decision we made during our migration. Etsy has several hundred engineers, and very few of them had TypeScript experience before this migration (myself included). We realized that, in order for our migration to be successful, people would have to learn how to use TypeScript first. Turning on the switch and telling everyone to have at it would probably leave people confused, overwhelm our team with questions, and hurt the velocity of our product engineers. By onboarding teams gradually, we could work to refine our tooling and educational materials. It also meant that no engineer could write TypeScript without their teammates being able to review their code. Gradual onboarding gave our engineers time to learn TypeScript and to factor it into their roadmaps.

Technical details (the fun stuff)

There were plenty of fun technical challenges during the migration. Surprisingly, the easiest part of adopting TypeScript was adding support for it to our build process. I won’t go into a ton of details about this because build systems come in many different flavors, but in short:

  • We use Webpack to build our JavaScript. Webpack transpiles our modern JavaScript into older, more compatible JavaScript using Babel.
  • Babel has a lovely plugin called @babel/preset-typescript that quickly turns TypeScript into JavaScript, but expects you to do your own type-checking.
  • To check our types, we run the TypeScript compiler as part of our test suite, configured via its noEmit option so that it doesn’t actually write out any transpiled files.
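Concretely, the Babel half of that setup is just a preset list. A hypothetical, abridged version:

```json
{
    "presets": ["@babel/preset-env", "@babel/preset-typescript"]
}
```

Alongside it, a package.json script along the lines of `"typecheck": "tsc --noEmit"` runs in the test suite, so type errors fail the build without Babel ever needing to understand the types.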

All of the above took a week or two, and most of that time was spent validating that TypeScript we sent to production didn’t behave strangely. The rest of our tooling around TypeScript took much more time and turned out to be a lot more interesting.

Improving type specificity with typescript-eslint

At Etsy, we make heavy use of custom ESLint linting rules. They catch all sorts of bad patterns for us, help us deprecate old code, and keep our pull request comments on-topic and free of nitpicks. If it’s important, we try to write a lint rule for it. One place that we found an opportunity for linting was in enforcing type specificity, which I generally use to mean “how accurately a type fits the thing it is describing”.

For example, imagine a function that takes in the name of an HTML tag and returns an HTML element. The function could accept any old string as an argument, but if it uses that string to create an element, then it would be nice to make sure that the string was in fact the name of a real HTML element.

// This function type-checks, but I could pass in literally any string as an argument.
function makeElement(tagName: string): HTMLElement {
   return document.createElement(tagName);
}
// This throws a DOMException at runtime
makeElement("literally anything at all");

If we put in a little effort to make our types more specific, it’ll be a lot easier for other developers to use our function properly.

// This function makes sure that I pass in a valid HTML tag name as an argument.
// It makes sure that ‘tagName’ is one of the keys in 
// HTMLElementTagNameMap, a built-in type where the keys are tag names 
// and the values are the types of elements.
function makeElement(tagName: keyof HTMLElementTagNameMap): HTMLElement {
   return document.createElement(tagName);
}
// This is now a type error.
makeElement("literally anything at all");
// But this isn't. Excellent!
makeElement("canvas");

Moving to TypeScript meant that we had a lot of new practices we needed to think about and lint for. The typescript-eslint project provided us with a handful of TypeScript-specific rules to take advantage of. For instance, the ban-types rule let us warn against using the generic Element type in favor of the more specific HTMLElement type.

We also made the (somewhat controversial) decision not to allow non-null assertions and type assertions in our codebase. The former allows a developer to tell TypeScript that something isn’t null when TypeScript thinks it might be, and the latter allows a developer to treat something as whatever type they choose.

// This is a constant that might be `null`.
const maybeHello = Math.random() > 0.5 ? "hello" : null;
// The `!` below is a non-null assertion.
// This code type-checks, but fails at runtime.
const yellingHello = maybeHello!.toUpperCase();
// This is a type assertion.
const x = {} as { foo: number };
// This code type-checks, but `x.foo` is actually undefined at runtime.
x.foo;

Both of these syntax features allow a developer to override TypeScript’s understanding about the type of something. In many cases, they both imply a deeper problem with a type that probably needs to be fixed. By doing away with them, we force our types to be more specific about what they’re describing. For instance, you might be able to use “as” to turn an Element into an HTMLElement, but you likely meant to use an HTMLElement in the first place. TypeScript itself has no way of disabling these language features, but linting allows us to identify them and keep them from being deployed.
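In typescript-eslint terms, the combination described above looks something like this (a sketch of just the relevant rules, not our full configuration):

```json
{
    "rules": {
        "@typescript-eslint/no-non-null-assertion": "error",
        "@typescript-eslint/consistent-type-assertions": [
            "error",
            { "assertionStyle": "never" }
        ],
        "@typescript-eslint/ban-types": [
            "error",
            { "types": { "Element": "Prefer the more specific HTMLElement." } }
        ]
    }
}
```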

Linting is really useful as a tool to deter folks from bad patterns, but that doesn’t mean that these patterns are universally bad — with every rule, there are exceptions. The nice thing about linting is that it provides a reasonable escape hatch. Should we really, really need to use “as”, we can always add a one-off linting exception.

// NOTE: I promise there is a very good reason for us to use `as` here.
// eslint-disable-next-line @typescript-eslint/consistent-type-assertions
const x = {} as { foo: number };

Adding types to our API

We wanted our developers to write effective TypeScript code, so we needed to make sure that we provided types for as much of the development environment as possible. At first glance, this meant adding types to our reusable design components, helper utilities, and other shared code. But ideally any data a developer might need to access should come with its own types. Almost all of the data on our site passes through the Etsy API, so if we could provide types there, we’d get coverage for much of our codebase very quickly.

Etsy’s API is implemented in PHP, and we generate both PHP and JavaScript configurations for each endpoint to help simplify the process of making a request. In JavaScript, we use a light wrapper around the fetch API called EtsyFetch to help facilitate these requests. That all looks loosely like this:

// This function is generated automatically.
function getListingsForShop(shopId, optionalParams = {}) {
   return {
       url: `apiv3/Shop/${shopId}/getListings`,
       optionalParams,
   };
}
// This is our fetch() wrapper, albeit very simplified.
function EtsyFetch(config) {
   const init = configToFetchInit(config);
   return fetch(config.url, init);
}
// Here's what a request might look like (ignoring any API error handling).
const shopId = 8675309;
EtsyFetch(getListingsForShop(shopId))
   .then((response) => response.json())
   .then((data) => {
       alert(data.listings.map(({ id }) => id));
   });

This sort of pattern is common throughout our codebase. If we didn’t generate types for our API responses, developers would have to write them all out by hand and hope that they stayed in sync with the actual API. We wanted strict types, but we also didn’t want our developers to have to bend over backwards to get them.

We ended up taking advantage of some work that our own developer API uses to turn our endpoints into OpenAPI specifications. OpenAPI specs are standardized ways to describe API endpoints in a format like JSON. While our developer API used these specs to generate public-facing documentation, we could also take advantage of them to generate TypeScript types for our API’s responses. We spent a lot of time writing and refining an OpenAPI spec generator that would work across all of our internal API endpoints, and then used a library called openapi-typescript to turn those specs into TypeScript types. Once we had TypeScript types generated for all of our endpoints, we still needed to get them into the codebase in a usable way. We decided to weave the generated response types into our generated configs, and then to update EtsyFetch to use these types in the Promise that it returned. Putting all of that together looks roughly like this:

// These types are globally available:
interface EtsyConfig<JSONType> {
    url: string;
}
interface TypedResponse<JSONType> extends Response {
    json(): Promise<JSONType>;
}
// This is roughly what a generated API config file looks like:
import OASGeneratedTypes from "api/oasGeneratedTypes";
type JSONResponseType = OASGeneratedTypes["getListingsForShop"];
function getListingsForShop(shopId): EtsyConfig<JSONResponseType> {
    return {
        url: `apiv3/Shop/${shopId}/getListings`,
    };
}
// This is (looooosely) what EtsyFetch looks like:
function EtsyFetch<JSONType>(config: EtsyConfig<JSONType>) {
    const init = configToFetchInit(config);
    const response: Promise<TypedResponse<JSONType>> = fetch(config.url, init);
    return response;
}
// And this is what our product code looks like:
EtsyFetch(getListingsForShop(shopId))
    .then((response) => response.json())
    .then((data) => {
        data.listings; // "data" is fully typed using the types from our API
    });

The results of this pattern were hugely helpful. Existing calls to EtsyFetch now had strong types out-of-the-box, no changes necessary. Plus, if we were to update our API in a way that would cause a breaking change in our client-side code, our type-checker would fail and that code would never make it to production.

Typing our API also opened the door for us to use the API as a single source of truth between the backend and the browser. For instance, if we wanted to make sure we had a flag emoji for all of the locales our API supported, we could enforce that using types:

type Locales = OASGeneratedTypes["updateCurrentLocale"]["locales"];
const localesToIcons: Record<Locales, string> = {
    "en-us": "🇺🇸",
    "de": "🇩🇪",
    "fr": "🇫🇷",
    "lbn": "🇱🇧",
    //... If a locale is missing here, it would cause a type error.
};

Best of all, none of these features required changes to the workflows of our product engineers. As long as they used a pattern they were already familiar with, they got types for free.

Improving the dev experience by profiling our types

A big part of rolling out TypeScript was paying really close attention to complaints coming from our engineers. While we were still early in our migration efforts, a few folks mentioned that their editors were sluggish when providing type hints and code completions. Some told us they were waiting almost half a minute for type information to show up when hovering over a variable, for example. This problem was extra confusing considering we could run the typechecker across all of our TS files in well under a minute; surely type information for a single variable shouldn’t be that expensive.

We were lucky enough to get a meeting with some of the maintainers of the TypeScript project. They were interested in seeing TypeScript be successful in a unique codebase like Etsy’s. They were also quite surprised to hear about our editor challenges, and were even more surprised when TypeScript took almost 10 full minutes to check our whole codebase, unmigrated files and all.

After some back-and-forth to make sure we weren’t including more files than we needed, they pointed me to the performance tracing feature that they had just introduced at the time. The trace made it pretty apparent that TypeScript had a problem with one of our types when it tried to typecheck an unmigrated Javascript file. Here is the trace for that file (width here represents time).
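If you want to try this on your own codebase, producing and summarizing a trace looks roughly like this (the --generateTrace flag shipped in TypeScript 4.1, and @typescript/analyze-trace is a helper package from the TypeScript team):

```shell
# Type-check without emitting, writing trace files into ./trace
npx tsc --noEmit --generateTrace trace

# Summarize the hot spots found in the trace
npx @typescript/analyze-trace trace
```

The raw trace files can also be opened directly in a Chromium browser’s about://tracing viewer.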

As it turned out, we had a circular dependency in the types for an internal utility that helps us create immutable objects. Those types had worked flawlessly in all of the code we had migrated so far, but some of their uses in yet-to-be-migrated parts of the codebase created an infinite type loop. When someone opened a file in those parts of the codebase, or when we ran the typechecker on all of our code, TypeScript would hit the loop, spend a lot of time trying to make sense of that type, and then give up and log a type error. Fixing that type reduced the time it took to check that one file from almost 46 seconds to less than one.

This type caused problems in other places as well. After applying the fix, checking the entire codebase took about a third of the time and reduced our memory usage by a whole gig.

If we hadn’t caught this problem, it would have eventually made our tests (and therefore our production deploys) a whole lot slower. It also would have made writing TypeScript really, really unpleasant for everyone.

Education

The biggest hurdle to adopting TypeScript is, without question, getting everyone to learn TypeScript. TypeScript works better the more types there are. If engineers aren’t comfortable writing TypeScript code, fully adopting the language becomes an uphill battle. As I mentioned above, we decided that a team-by-team rollout would be the best way to build some institutional TypeScript muscles.

Groundwork

We started our rollout by working directly with a small number of teams. We looked for teams that were about to start new projects with relatively flexible deadlines and asked them if they were interested in writing them in TypeScript. While they worked, our only job was to review their pull requests, implement types for modules that they needed, and pair with them as they learned.

During this time, we were able to refine our types and develop documentation tailored specifically to tricky parts of Etsy’s codebase. Because there were only a handful of engineers writing TypeScript, it was easy to get direct feedback from them and untangle issues they ran into quickly. These early teams informed a lot of our linting rules, and they helped make sure our documentation was clear and useful. It also gave us the time we needed to complete some of the technical portions of our migration, like adding types to the API.

Getting teams educated

Once we felt that most of the kinks had been ironed out, we decided to onboard any team that was both interested and ready. In order to prepare teams to write TypeScript, we asked them to first complete some training. We found a course from ExecuteProgram that we thought did a good job of teaching the basics of TypeScript in an interactive and effective way. All members of a team would need to complete this course (or have some amount of equivalent experience) before we considered them ready to onboard.

To entice people into taking a course on the internet, we worked to get people excited for TypeScript in general. We got in touch with Dan Vanderkam, the author of Effective TypeScript, to see if he’d be interested in giving an internal talk (he said yes, and his talk and book were both superb). Separately, I designed some extremely high-quality virtual badges that we gave to people at the midpoint and at the end of their coursework, just to keep them motivated (and to keep an eye on how quickly people were learning TypeScript).

We then encouraged newly onboarded teams to set some time aside to migrate JS files that their team was responsible for. We found that migrating a file you’re already familiar with is a great way to learn how to use TypeScript: it’s a direct, hands-on way to work with types that you can then immediately use elsewhere. In fact, we decided against using more sophisticated automatic migration tools (like the one Airbnb wrote) in part because they take away some of these learning opportunities. Plus, an engineer with a little bit of context can migrate a file much more effectively than a script can.
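To give a flavor of what one of those hand migrations involves, here is a hypothetical before-and-after for a small invented utility (not actual Etsy code):

```typescript
// Before (JavaScript): nothing says what "items" holds or whether "joiner" is optional.
// function joinNames(items, joiner) {
//     return items.map((item) => item.name).join(joiner || ", ");
// }

// After (TypeScript): the contract is explicit and machine-checked.
interface Named {
    name: string;
}

function joinNames(items: Named[], joiner: string = ", "): string {
    return items.map((item) => item.name).join(joiner);
}

joinNames([{ name: "hat" }, { name: "scarf" }]); // "hat, scarf"
```

Beyond the annotations themselves, migrations like this often surface small latent bugs, which is part of why doing them by hand is so instructive.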

The logistics of a “team-by-team roll-out”

Onboarding teams one at a time meant that we had to prevent individual engineers from writing TypeScript before the rest of their team was ready. This happened more often than you’d think; TypeScript is a really cool language and people were eager to try it out, especially after seeing it being used in corners of our codebase. To prevent this sort of premature adoption, we wrote a simple git commit hook to disallow TypeScript changes from users who weren’t part of a safelist. When a team was ready, we’d simply add them to the safelist.
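The check at the heart of that hook can be tiny. Here is a hypothetical POSIX-shell sketch (the real hook also has to gather the author and the staged file list from git, which is omitted here):

```shell
# Returns 0 if AUTHOR may commit FILES, 1 if any TypeScript file is
# present and AUTHOR does not appear in the safelist file.
# Usage: check_safelist AUTHOR SAFELIST_FILE FILE...
check_safelist() {
    author="$1"
    safelist="$2"
    shift 2
    for f in "$@"; do
        case "$f" in
            *.ts|*.tsx)
                # A TypeScript change: the author must be safelisted.
                grep -qx "$author" "$safelist" || return 1
                ;;
        esac
    done
    return 0
}
```

In the actual hook, the file list would come from `git diff --cached --name-only`, and a rejection would print a friendly pointer to the onboarding docs rather than a bare exit code.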

Separately, we worked hard to develop direct communications with the engineering managers on every team. It’s easy to send out an email to the whole engineering department, but working closely with each manager helped to ensure no one was surprised by our rollout. It also gave us a chance to work through concerns that some teams had, like finding the time to learn a new language. It can be burdensome to mandate a change, especially in a large company, but a small layer of direct communication went a long way (even if it did require a sizable spreadsheet to keep track of all the teams).

Supporting teams after onboarding

Reviewing PRs proved to be a really good way to catch problems early, and it informed a lot of our subsequent linting rules. To help with the migration, we decided to explicitly review every PR with TypeScript in it until the rollout was well underway. We scoped our reviews to the syntax itself and, as we grew, solicited help from engineers who had successfully onboarded. We called the group the TypeScript Advisors, and they became an invaluable source of support to newly-minted TypeScript Engineers.

One of the coolest parts about the rollout was how organically a lot of the learning process happened. Unbeknownst to us, teams held big pairing sessions where they worked through problems or tried to migrate a file together. A few even started book clubs to read through TypeScript books. Migrations like these are certainly a lot of work, but it’s easy to forget how much of that work is done by passionate coworkers and teammates.

Where are we now?

As of earlier this fall, we started requiring all new files to be written in TypeScript. About 25% of our files have types, and that number doesn’t account for deprecated features, internal tools, and dead code. Every team has been successfully onboarded to TypeScript at the time of writing.

“Finishing a migration to TypeScript” isn’t something with a clear definition, especially for a large codebase. While we’ll likely have untyped JavaScript files in our repo for a while yet, almost every new feature we ship from here on will be typed. All of that aside, our engineers are already writing and using TypeScript effectively, developing their own tooling, starting really thoughtful conversations about types, and sharing articles and patterns that they found useful. It’s hard to know for sure, but people seem to be enjoying a language that almost no one had experience with this time last year. To us, that feels like a successful migration.

Appendix: A list of learning resources

If our adventure with TypeScript has left you interested in adopting it in your own codebase, here are some resources that I found useful.

  • TypeScript’s Handbook is a fantastic resource if you’re brand new to TypeScript.
  • I personally learned a lot about TypeScript, both the language and its adoption, from Effective TypeScript by Dan Vanderkam.
  • The TypeScript project’s Performance wiki is a trove of useful advice for ensuring a performant TypeScript environment.
  • If you want to get really good at writing complex types for your codebase’s weird libraries, the type-challenges repo is a great way to get experience.