Etsy’s experiment with immutable documentation

Posted on October 10, 2018

Introduction

Writing documentation is like trying to hit a moving target. The way a system works changes constantly, so as soon as you write a piece of documentation for it, it starts to get stale. And the systems that need docs the most are the ones being actively used and worked on, which are changing the fastest. So the most important docs go stale the fastest! [1]

Etsy has been experimenting with a radical new approach: immutable documentation.

Woah, you just got finished talking about how documentation goes stale! So doesn’t that mean you have to update it all the time? How could you make documentation read-only?

How docs go stale

Let’s back up for a sec. When a bit of a documentation page becomes outdated or incorrect, it typically doesn’t invalidate the entire doc (unless the system itself is deprecated). Usually it’s just one part of the doc, a code snippet, say, that uses outdated syntax for an API.

For example, we have a command-line tool called dbconnect that lets us query the dev and prod databases from our VMs. Our internal wiki has a doc page that discusses various tools that we use to query the dbs. The part that discusses dbconnect goes something like:

Querying the database via dbconnect ...

((section 1))
dbconnect is a script to connect to our databases and query them. [...]

((section 2))
The syntax is:

% dbconnect <shard>

Section 1 gives context about dbconnect and why it exists, and section 2 gives tactical details of how to use it.

Now say a switch is added so that dbconnect --dev <shard> queries the dev db, and dbconnect --prod <shard> queries the prod db. Section 2 above now needs to be updated, because it’s using outdated syntax for the dbconnect command. But the contextual description in section 1 is still completely valid. So this doc page is now technically stale as a whole because of section 2, but the narrative in section 1 is still very helpful!

In other words, the parts of the doc that are most likely to go stale are the tactical, operational details of the system. How to use the system is constantly changing. But the narrative of why the system exists and the context around it is less likely to change quite so quickly.

Docs can be separated into how-docs and why-docs

Put another way: ‘code tells how, docs tell why’ [2]. Code is constantly changing, so the more code you put into your docs, the faster they’ll go stale. To codify this further, let’s use the term “how-doc” for operational details like code snippets, and “why-doc” for narrative, contextual descriptions [3]. We can mitigate staleness by limiting the amount we mix the how-docs with the why-docs.

Documenting a command using Etsy’s FYI system

At Etsy we’ve developed a system for adding how-docs directly from Slack. It’s called “FYI”. The purpose of FYI is to make documenting tactical details — commands to run, syntax details, little helpful tidbits — as frictionless as possible.

Here’s how we’d approach documenting dbconnect using FYIs [4]:

Kaley was searching the wiki for how to connect to the dbs from her VM, to no avail. So she asks about it in a Slack channel:

hey @here anyone remember how to connect to the dbs in dev? I forget how. It’s something like dbconnect etsy_shard_001A but that’s not working

When she finds the answer, she adds an FYI using the ?fyi command (using our irccat integration in Slack [5]):

?fyi connect to dbs with `dbconnect etsy_shard_000_A` (replace `000` with the shard number). `A` or `B` is the side

Jason sees Kaley add the FYI and mentions you can also use dbconnect to list the databases:

you can also do `dbconnect -l` to get a list of all DBs/shards/etc, and it works for dev-proxy on or off

Kaley then adds the :fyi: Slack reaction (reacji) to his comment to save it as an FYI:

you can also do `dbconnect -l` to get a list of all DBs/shards/etc, and it works for dev-proxy on or off

A few weeks later, Paul-Jean uses the FYI query command ?how to search for info on connecting to the databases, and finds Kaley’s FYI [6]:

?how database connect

He then looks up FYIs mentioning dbconnect specifically to discover Jason’s follow-up comment:

?how dbconnect

But he notices that the dbconnect command has been changed since Jason’s FYI was added: there is now a switch to specify whether you want dev or prod databases. So he adds another FYI to supplement Jason’s:

?fyi to get a list of all DBs/shards/etc in dev, use `dbconnect --dev`, and to list prod DBs, use `dbconnect --prod` (default)

Now ?how dbconnect returns Paul-Jean’s FYI first, and Jason’s second:

?how dbconnect

FYIs trade completeness for freshness

Whenever you do a ?how query, matching FYIs are always returned most recent first. So you can always update how-docs for dbconnect by adding an FYI with the keyword “dbconnect” in it. This is crucial, because it means the freshest docs always rise to the top of search results.

FYIs are immutable, so Paul-Jean doesn’t have to worry about changing any FYIs created by Jason. He just adds them as he thinks of them, and the timestamps determine the priority of the results. How-docs change so quickly, it’s easier to just replace them than try to edit them. So they might as well be immutable.
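
Under the hood, a system like this can be as simple as an append-only store plus a search ordered by timestamp. Here is a toy sketch in Python and SQLite (a stand-in for the PHP/SQLite service described in the footnotes, using naive keyword matching instead of full-text search) of the core idea: FYIs are only ever inserted, and ?how-style queries return the freshest matches first.

import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE fyis (created_at REAL, author TEXT, text TEXT)")

def add_fyi(author, text):
    # ?fyi -- FYIs are immutable: we only ever insert, never update or delete
    db.execute("INSERT INTO fyis VALUES (?, ?, ?)", (time.time(), author, text))

def how(keyword, limit=5):
    # ?how -- naive keyword match, freshest FYIs first
    rows = db.execute(
        "SELECT author, text FROM fyis WHERE text LIKE ? ORDER BY created_at DESC, rowid DESC LIMIT ?",
        ("%" + keyword + "%", limit),
    )
    return rows.fetchall()

add_fyi("jason", "you can also do `dbconnect -l` to get a list of all DBs/shards/etc")
add_fyi("paul-jean", "to list dev DBs use `dbconnect --dev`, prod DBs use `dbconnect --prod`")
print(how("dbconnect"))  # Paul-Jean's newer FYI comes back first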

Since every FYI has an explicit timestamp, it’s easy to gauge how current they are relative to API versions, OS updates, and other internal milestones. How-docs are inherently stale, so they might as well have a timestamp showing exactly how stale they are.

The tradeoff is that FYIs are just short snippets. There’s no room in an FYI to add much context. In other words, FYIs mitigate staleness by trading completeness for freshness.

Since FYIs lack context, there’s still a need for why-docs (e.g. a wiki page) about connecting to the dev/prod dbs, which mention the dbconnect command along with other relevant resources. But if the how-docs are largely left in FYIs, those why-docs are less likely to go stale.

So FYIs allow us to decouple how-docs from why-docs. The tactical details are probably what you want in a hurry. The narrative around them is something you sit back and read on a wiki page.

What FYIs are

To summarize, FYIs are:

What FYIs are NOT

Similarly, FYIs are NOT:

Conclusions

Etsy has recognized that technical documentation is a mixture of two distinct types: a narrative that explains why a system exists (“why-docs”), and operational details that describe how to use the system (“how-docs”). In trying to overcome the problem of staleness, the crucial observation is that how-docs typically change faster than why-docs do. Therefore the more how-docs are mixed in with why-docs in a doc page, the more likely the page is to go stale.

We’ve leveraged this observation by creating an entirely separate system to hold our how-docs. The FYI system simply allows us to save Slack messages to a persistent data store. When someone posts a useful bit of documentation in a Slack channel, we tag it with the :fyi: reacji to save it as a how-doc. We then search our how-docs directly from Slack using a bot command called ?how.

FYIs are immutable: to update them, we simply add another FYI that is more timely and correct. Since FYIs don’t need to contain narrative, they’re easy to add, and easy to update. The ?how command always returns more recent FYIs first, so fresher matches always have higher priority. In this way, the FYI system combats documentation staleness by trading completeness for freshness.

We believe the separation of operational details from contextual narrative is a useful idea that can be used for documenting all kinds of systems. We’d love to hear how you feel about it! And we’re excited to hear about what tooling you’ve built to make documentation better in your organization. Please get in touch and share what you’ve learned. Documentation is hard! Let’s make it better!

Acknowledgements

The FYI system was designed and implemented by Etsy’s FYI Working Group: Paul-Jean Letourneau, Brad Greenlee, Eleonora Zorzi, Rachel Hsiung, Keyur Govande, and Alec Malstrom. Special thanks to Mike Lang, Rafe Colburn, Sarah Marx, Doug Hudson, and Allison McKnight for their valuable feedback on this post.

References

  1. From “The Golden Rules of Code Documentation”: “It is almost impossible without an extreme amount of discipline, to keep external documentation in-sync with the actual code and/or API.”
  2. Derived from “code tells what, docs tell why” in this HackerNoon post.
  3. The similarity of the terms “how-doc” and “why-doc” to the term here-doc is intentional. For any given command, a here-doc is used to send data into the command in-place, how-docs are a way to document how to use the command, and why-docs are a description of why the command exists to begin with.
  4. You can replicate the FYI system with any method that allows you to save Slack messages to a predefined, searchable location. For example, you could install the Reacji Channeler bot, which lets you assign a Slack reacji of your choosing to cause the message to be copied to a given channel: assign an “fyi” reacji to a new channel called “#fyi”, say. Then to search your FYIs, you would simply go to the #fyi channel and search the messages there using the Slack search box.
  5. When the :fyi: reacji is added to a Slack message (or the ?fyi irccat command is used), an outgoing webhook sends a POST request to irccat.etsy.com with the message details. This triggers a PHP script to save the message text to a SQLite database, and sends an acknowledgement back to the Slack incoming webhook endpoint. The acknowledgement says “OK! Added your FYI”, so the user knows their FYI has been successfully added to the database.
  6. Searching FYIs using the ?how command uses the same architecture as for adding an FYI, except the PHP script queries the SQLite table, which supports full-text search via the FTS plugin.


How Etsy Handles Peeking in A/B Testing

Posted on October 3, 2018

Etsy relies heavily on experimentation to improve our decision-making process. We leverage our internal A/B testing tool when we launch new features, polish the look and feel of our site, or even make changes to our search and recommendation algorithms. For years, Etsy has prided itself on its culture of continuous experimentation. However, as our experimentation platform scales and the velocity of experimentation increases rapidly across the company, we also face a number of new challenges. In this post, we investigate one of these challenges: how to peek at experimental results early in order to increase the velocity of our decision-making without sacrificing the integrity of our results.

The Peeking Problem

In A/B testing, we’re looking to determine if a metric we care about (e.g. the percentage of visitors who make a purchase) is different between the control and treatment groups. But when we detect a change in the metric, how do we know if it is real or due to random chance? We can look at the p-value of our statistical test, which indicates the probability that we would see the detected difference between groups assuming there is no true difference. When the p-value falls below the significance level threshold, we say that the result is statistically significant and we reject the hypothesis that the control and treatment are the same.

So we can just stop the experiment when the hypothesis test for the metric we care about has a p-value of less than 0.05, right? Wrong. To draw the strongest conclusions from the p-value in the context of an A/B test, we have to fix the sample size of the experiment in advance and make a decision on the p-value only once. Peeking at data regularly and stopping an experiment as soon as the p-value dips below 0.05 increases the rate of Type I errors, or false positives, because the false positive probability of each test compounds, increasing the overall probability that you’ll see a false result.

Let’s look at an example to gain a more concrete view of the problem. Suppose we run an experiment where there is no true change between the control and experimental variant and both have a baseline target metric of 50%. If we use a significance level of 0.1 and there is no peeking (in other words, the sample size needed before a decision is made is determined in advance), then the rate of false positives is 10%. However, if we do peek and we check the significance level at every observation, then after 500 observations there is over a 50% chance of incorrectly stating that the treatment is different from the control (Figure 1).

Figure 1: Chances for accepting that A and B are different, with A and B both converting at 50%.
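
This effect is easy to reproduce in a quick simulation. The sketch below (Python, written for this post rather than taken from our platform) runs many A/A experiments at a 50% conversion rate and counts how often a repeated two-proportion z-test dips below the 0.1 significance level at some point within 500 observations per group.

import math
import random

def p_value(conv_a, n_a, conv_b, n_b):
    # two-sided p-value for a two-proportion z-test
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

def peeking_finds_a_difference(n_obs=500, rate=0.5, alpha=0.1):
    # True if checking after every observation ever "detects" a difference that isn't there
    conv_a = conv_b = 0
    for n in range(1, n_obs + 1):
        conv_a += random.random() < rate
        conv_b += random.random() < rate
        if p_value(conv_a, n, conv_b, n) < alpha:
            return True
    return False

trials = 2000
false_positives = sum(peeking_finds_a_difference() for _ in range(trials))
print("false positive rate with peeking: {:.0%}".format(false_positives / trials))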

At this point, you might already have figured that the simplest way to solve the problem would be to fix a sample size in advance and run an experiment until the end before checking the significance level. However, this requires strictly enforced separation between the design and analysis of experiments, which can have large repercussions throughout the experimental process. In the early stages of an experiment, we may miss a bug in the setup or in the feature being tested that will invalidate our results later. If we don’t catch these early, it slows down our experimental process unnecessarily, leaving less time for iterations and real site changes. Another issue with setup is that it can be difficult to predict the effect size product teams would like to obtain prior to the experiment, which can make it hard to optimize the sample size in advance. Even assuming we set up our experiment perfectly, there are implications further down the line. If an experiment is impacting a metric in a negative way, we want to be aware as soon as possible so we don’t negatively affect our users’ experience. These considerations become even more pronounced when we’re running an experiment on a small population, or in a less trafficked part of the site, where it can take months to reach the target sample size. Across teams, we want to be able to iterate quickly without sacrificing the integrity of our results.

With this in mind, we need to come up with statistical methodology that will give reliable inference while still providing product teams the ability to continuously monitor experiments, especially for our long-running experiments. At Etsy, we tackle this challenge from two sides, user interface and statistical procedures. We made a few user interface changes to our A/B testing tool to prevent our stakeholders from drawing false conclusions, and we implemented a flexible p-value stopping-point in our platform, which takes inspiration from the sequential testing concept in statistics.

It is worth noting that the peeking problem has been studied by many, including industry veterans [1, 2], developers of large-scale commercial A/B testing platforms [3, 4] and academic researchers [5]. Moreover, it is hardly a challenge exclusive to A/B testing on the web. The peeking problem has troubled the medical field for a long time; for example, medical scientists could peek at the results and stop a clinical trial early because of initial positive results, leading to flawed interpretations of the data [6, 7].

Our Approach

In this section, we dive into the approach that we have designed and adapted to address the peeking problem: transitioning from traditional, fixed-horizon testing to sequential testing, and preventing peeking behaviors through user interface changes.

Sequential Testing with Difference in Converting Visits

Sequential testing, which has been widely used in clinical trials [8, 9] and gained recent popularity for web experimentation [10], guarantees that if we end the test when the p-value is below a predefined threshold α, the false positive rate will be no more than α. It does so by computing the probabilities of false positives at each potential stopping point using dynamic programming, assuming that our test statistic is normally distributed. Since we can compute these probabilities, we can then adjust the test’s p-value threshold, which in turn changes the false-positive chance, at every step so that the total false positive rate is below the threshold that we desire. Therefore, sequential testing enables concluding experiments as soon as the data justifies it, while also keeping our false positive rate in check.

We investigated a few methods, including O’Brien-Fleming, Pocock, and sequential testing using the difference in successful observations. We ultimately settled on the last approach. Using the difference in successful observations, we look at the raw difference in converting visits and stop an experiment when this difference becomes large enough. The difference threshold is only valid until we reach a total number of converted visits. This method is good at detecting small changes and does so quickly, which makes it most suitable for our needs. Nevertheless, we did consider some cons this method presented as well. Traditional power and significance calculations use the proportion of successes, whereas looking at the difference in converted visits does not take into account total population size. Because of this, we are more likely to reach the total number of converted visits before we see a large enough difference in converted visits with high baseline target metrics. This means we are more likely to miss a true change in these cases. Furthermore, it requires extra setup when an experiment is not evenly split across variants. We chose to use this method with a few adjustments for these shortcomings so we could increase our speed of detecting real changes between experimental groups.

Our implementation of this method is influenced by the approach Evan Miller described here. The method sets a threshold for the difference between the control and treatment converted visits based on the minimum detectable effect and the target false positive and false negative rates. If the experiment reaches or passes the threshold, we allow the experiment to end early. If this difference is not reached, we assess our results using the standard approach of a power analysis. The combination of these methods creates a continuous p-value threshold curve: we can safely stop an experiment when its p-value falls under the curve. This threshold is lower near the beginning of an experiment and converges to our significance level as the experiment reaches our targeted power. This allows us to detect changes more quickly for low baselines while not missing smaller changes for experiments with high baseline target metrics.

Figure 2: Example of a p-value threshold curve.
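
For intuition, the simple sequential rule Evan Miller describes [10] fits in a few lines. Our production thresholds differ, but the shape is the same: pick a target number of conversions up front, stop early if the treatment's lead in converting visits gets large enough, and otherwise fall back to the fixed-horizon analysis. A rough Python sketch:

import math

def sequential_decision(treatment_conversions, control_conversions, target_n):
    # One check of Evan Miller's simple sequential test (a sketch, not Etsy's exact thresholds).
    # target_n is the total number of conversions (treatment + control) planned up front.
    lead = treatment_conversions - control_conversions
    if lead >= 2 * math.sqrt(target_n):
        return "stop early: treatment wins"
    if treatment_conversions + control_conversions >= target_n:
        return "stop: no winner detected"
    return "keep collecting data"

# With 10,000 planned conversions, a lead of 200 converting visits ends the test early.
print(sequential_decision(treatment_conversions=5100, control_conversions=4900, target_n=10000))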

To validate this approach, we tested it on results from experimental simulations with various baselines and effect sizes using mock experimental conditions. Before implementing, we wanted to understand:

  1. What effect will this have on false positive rates?
  2. What effect does early stopping have on reported effect size and confidence intervals?
  3. How much faster will we get a signal for experiments with true changes between groups?

We found that when using a p-value curve tuned for a 5% false positive rate, our early stopping threshold does not materially increase the false positive rate and we can be confident of a directional change.  

One of the downsides of stopping experiments early, however, is that with an effect size under ~5%, we tend to overestimate the impact and widen the confidence interval. To accurately attribute increases in metrics to experimental wins, we developed a haircut formula to apply to the effect size in metrics for experiments that we decide to end early. Furthermore, we offset some of these effects by setting a standard of running experiments for at least 7 days to account for different weekend and weekday trends.

Figure 3: Reported Vs. True Effect Size

We tested this method with a series of simulations and saw that for experiments which would take 3 weeks to run assuming a standard power analysis, we could save at least a week in most cases where there was a real change between variants. This helped us feel confident that even with a slight overestimation of effect size, it was worth the time savings for teams with low baseline target metrics who typically struggle with long experimental run times.

Figure 4: Day Savings From Sequential Testing

UI Improvements

In our experimental testing tool, we wanted stakeholders to have access to the metrics and calculations we measure throughout the duration of the experiment. In addition to the p-value, we care about power and the confidence interval. First, power. Teams at Etsy often have to coordinate experiments on the same page, so it is important for teams to have an idea of how long an experiment will have to run assuming no early stopping. We do this by running an experiment until we reach a set power.

Second, the confidence interval (CI) is the range of values within which we are confident the true value of a particular metric falls. In the context of A/B testing, for example, if we ran the experiment millions of times, the true value of some effect size would fall within the 90% CI 90% of the time. There are three things that we care most about in relation to the confidence interval of an effect in an experiment (a small worked example follows the list):

  1. Whether the CI includes zero, because this maps exactly to the decision we would make with the p-value; if the 90% CI includes zero, then the p-value is greater than 0.1. Conversely, if it doesn’t include zero, then the p-value is less than 0.1;
  2. The smaller the CI, the better estimate of the parameter we have;
  3. The farther away from zero the CI is, the more confident we can be that there is a true difference.
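
To make point 1 concrete, here is a small sketch (written for this post, not our production code) that computes a Wald 90% confidence interval for the difference in conversion rates and turns it into the increase/decrease/no-change call that drives the CI coloring described later in this post.

import math

Z_90 = 1.645  # z-score for a two-sided 90% confidence interval

def diff_ci_90(conv_c, n_c, conv_t, n_t):
    # Wald 90% CI for (treatment rate - control rate)
    p_c, p_t = conv_c / n_c, conv_t / n_t
    se = math.sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)
    diff = p_t - p_c
    return diff - Z_90 * se, diff + Z_90 * se

def verdict(low, high):
    if low > 0:
        return "significant increase"    # CI entirely positive
    if high < 0:
        return "significant decrease"    # CI entirely negative
    return "no detectable change"        # CI spans zero, so the p-value is greater than 0.1

low, high = diff_ci_90(conv_c=4800, n_c=100000, conv_t=5050, n_t=100000)
print("90% CI: ({:.4f}, {:.4f}) -> {}".format(low, high, verdict(low, high)))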

Previously in our A/B testing tool UI, we displayed statistical data as shown in the table below on the left. The “observed” column indicates results for the control and there is a “% Change” column for each treatment variant. When hovering over a number in the “% Change” column, a popover table appears, showing the observed and actual effect size, confidence level, p-value, and number of days we could expect to have enough data to power the experiment based on our expected effect size. 

Figure 5: User interface before changes.

However, always displaying numerical results in the “% Change” column could lead to stakeholders peeking at data and making an incorrect inference about the success of the experiment. Therefore, we added a row in the hover table to show the power of the test (assuming some fixed effect size), and made the following changes to our user interface:

  1. Show a visualization of the CI and color the bar red when the CI is entirely negative to indicate a significant decrease, green when the CI is entirely positive to indicate a significant increase, and grey when the CI spans 0.
  2. Display different messages in the “% Change” column and hover table to indicate different stages the experiment metric is currently in, depending on its power, p-value and calculated flexible p-value threshold. In the “% Change” column, possible messages include “Waiting on data”, “Not enough data”, “No change” and “+/- X %” (to show significant increase/ decrease). In the hover table, possible headers include “metric is not powered”, “there is no detectable change”, “we’re confident we detected a change”, and “directional change is correct but magnitude might be inflated” when early stopping is reached but the metric is not powered yet.   

Figure 6: User interface after changes.

Even after making these UI changes, making a decision on when to stop an experiment and whether or not to launch it is not always simple. Generally some things we advise our stakeholders to consider are:

  1. Do we have statistically significant results that support our hypothesis?
  2. Do we have statistically significant results that are positive but aren’t what we anticipated?
  3. If we don’t have enough data yet, can we just keep it running or is it blocking other experiments?
  4. Is there anything broken in the product experience that we want to correct, even if the metrics don’t show anything negative?
  5. If we have enough information on the main metrics overall, do we have enough information to iterate? For example, if we want to look at impact on a particular segment, which could be 50% of the traffic, then we’ll need to run the experiment twice as long as we had to in order to look at the overall impact.

We hope that these UI changes will help our stakeholders make better informed decisions while still letting them uncover cases where they have changed something more dramatically than expected and thus can stop the experiment sooner.

Further Discussion

In this section, we discuss a few more issues we examined while designing Etsy’s solutions to peeking.

Trade-off Between Power and Significance

There is a trade-off between Type I (false positive) and Type II (false negative) errors: if we decrease the probability of one of the errors, the probability of the other will increase (for a more detailed explanation, please see this short post). This translates into a trade-off between the p-value and power, because if we require stronger evidence to reject the null hypothesis (i.e. a smaller p-value threshold), then there is a smaller chance that we will be able to correctly reject a false null hypothesis, which means decreased power. The different messages we display on the user interface balance this issue to some degree. In the end, it is a choice that we have to make based on our priorities and focus in experimentation.

Weekend vs. Weekday Data Sample Size

At Etsy, the volume of traffic and intent of visitors varies from weekdays to weekends. This is not a concern for the sequential testing approach that we ultimately chose. However, it would be an issue for some other methods that require equal daily data sample size. During our research, we looked into ways to handle the inconsistency in our daily data sample size. We found that the GroupSeq package in R, which enables the construction of group sequential designs and has various alpha spending functions available to choose among, is a good way to account for this.

Other Types of Designs

The sequential sampling method that we have designed is a straightforward form of a stopping rule, modified to best suit our needs and circumstances. However, there are other types of sequential approaches that are more formally defined, such as the Sequential Probability Ratio Test (SPRT), which is utilized by Optimizely’s New Stats Engine [4], and the Sequential Generalized Likelihood Ratio test, which has been used in clinical trials [11]. There has also been debate in both academia and industry about the effectiveness of Bayesian A/B testing in solving the peeking problem [2, 5]. It is indeed a very interesting problem!

Final Thoughts

Accurate interpretation of statistical data is crucial in making informed decisions about product development. When online experiments have to be run efficiently to save time and cost, we inevitably run into dilemmas unique to our context, and peeking is just one of them. In researching and designing solutions to this problem, we examined some more rigorous theoretical work. However, the characteristics and priorities of online experimentation make it difficult to apply that work directly. Our approach outlined in this post, even though simple, addresses the root cause of the peeking problem effectively. Looking forward, we think the balance between statistical rigor and practical constraints is what makes online experimentation intriguing and fun to work on, and we at Etsy are very excited about tackling more interesting problems awaiting us.

This work is a collaboration between Callie McRee and Kelly Shen from the Analytics and Analytics Engineering teams. We would like to thank Gerald van den Berg, Emily Robinson, Evan D’Agostini, Anastasia Erbe, Mossab Alsadig, Lushi Li, Allison McKnight, Alexandra Pappas, David Schott and Robert Xu for helpful discussions and feedback.

References

  1. How Not to Run an A/B Test by Evan Miller
  2.  Is Bayesian A/B Testing Immune to Peeking? Not Exactly by David Robinson
  3.  Peeking at A/B tests: why it matters, and what to do about it by Johari et al., KDD’17
  4.  The New Stats Engine by Pekelis, et al., Optimizely
  5.  Continuous monitoring of A/B tests without pain: optional stopping in Bayesian testing by Deng, Lu, et al., CEUR’17
  6.  Trial sans Error: How Pharma-Funded Research Cherry-Picks Positive Results by Ben Goldacre of Scientific American, February 13, 2013
  7.  False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant by Simmons, Simonsohn, et al. (2011), Psychological Science, 22
  8. Interim Analyses and Sequential Testing in Clinical Trials by Nicole Solomon, BIOS 790, Duke University
  9. A Pocock approach to sequential meta-analysis of clinical trials by Shuster, J. J., & Neu, J. (2013), Research Synthesis Methods, 4(3), 10.1002/jrsm.1088
  10.  Simple Sequential A/B Testing by Evan Miller
  11.  Sequential Generalized Likelihood Ratio Tests for Vaccine Safety Evaluation by Shih, M.-C., Lai, T. L., Heyse, J. F. and Chen, J. (2010), Statistics in Medicine, 29: 2698-2708


How Etsy Localizes Addresses

Posted on September 26, 2018

Imagine you’re browsing the web from your overpriced California apartment one day and you find a neat new website with some really cool stuff. You pick out a few items, add them to your cart, and start the checkout process. You get to the part where they ask for your shipping address and this is the form you see:

It starts off easy – you fill in your name, your street, your apartment number, and then you reach the field labelled “Post Town”. What is a “Post Town”? Huh. Next you see “County”. Well, you know what a county is, but since when do you list it in your address? Then there’s “Postal code”. You might recognize it as what the US calls a “zip code”, but it’s still confusing to see.

So now you don’t really know what to do, right? Where do you put your city, or your state? Do you just cram your address into the form however you can and hope that you get your order? Or do you abandon your cart and decide not to buy anything from this site?

 

This is in fact a fake form I put together for this exercise, but it demonstrates exactly how a lot of our members outside the United States felt when we showed them  a US-centric address form.

 

Etsy does business in more than 200 countries, and it’s important that our members feel comfortable and confident when providing their shipping address and placing orders. They need to know that their orders are going to reach them. Furthermore, we ask for several addresses when new sellers join Etsy, like billing and banking addresses, and we want them to feel comfortable providing this information, and confident that we will know how to support them and their needs.

 

In this post I’ll cover:

 

Where we started: Generic Address Forms

When we first started this project, our address forms were designed in a variety of technologies, with minimal localization. Most address forms worked well for US members, and we displayed the appropriate forms for Canadian and Australian members, but members in every other country were shown a generic, unlocalized address form.

United States

Our address form for US members looks just fine – all the fields we expect to see are there, and there aren’t any unexpected fields.

Germany (et al)

 

This is the form we showed for Germany (and most other countries). We asked for a postal code, and a state, province, or region. For someone unfamiliar with German addresses, this might seem fine at first. But what if I told you that German addresses don’t have states? Even worse, in this form, state is a required field!

 

This form confused a lot of our German members, and they ended up putting any number of things in that field, just to be able to move forward. This led us to saved addresses like:

Ets Y. Crafter

123 Main Street

Berlin, Berlin 12435

Germany

In this case, the member just entered the city in the state field. This wasn’t the worst situation, and anything shipped to this address would probably arrive just fine.

But what about this address?

Ets Y. Crafter

123 Main Street

Berlin, California 12435

Germany

Sometimes the member entered a US state in the state field. This confused sellers and postal workers alike – we had real life examples of packages being shipped to the US because a state was included, even though the country listed was something totally different!

Ets Y. Crafter

123 Main Street

Berlin, dasdsaklklg

Germany

Members could even enter gibberish in the state field. Again, this was a bit confusing for sellers and the postal service.

 

What are non-US addresses supposed to look like?

Here’s an example of a German address:

Ets Y. Crafter

123 Main Street

12435 BERLIN

Germany

If you wanted to mail something to this address, you’d need to specify the recipient, the street name and number, the postal code and city, and the country. We could have used this address to determine an address format for Germany, but what about the almost 200 other countries Etsy supports? We didn’t really want to look up sample addresses for each country and guess at what the address format should be.

Thankfully, we didn’t have to do that.

We drew on 3 different sources when compiling a list of address formats for different countries.

So what kind of data did we get?

The most important piece of information we got is a format string that tells us:

We also got other formatting data, including:

Here’s what the formatting data looks like for a couple of different countries, and how that data is used to assemble the localized address form for that country.

United States

$format = [
  209 => [
      'format' => '%name\n%first_line\n%second_line\n%city, %state %zip\n%country_name',
      'required_fields' => [
          'name',
          'first_line',
          'city',
          'state',
          'zip',
      ],
      'uppercase_fields' => [
          'city',
          'state',
      ],
      'name' => 'UNITED STATES',
      'administrative_area_type' => 'state',
      'locality_type' => 'city',
      'postal_code_type' => 'zip',
      'postal_code_pattern' => '(\\d{5})(?:[ \\-](\\d{4}))?',
      'administrative_areas' => [
          'AL' => 'Alabama',
          'AK' => 'Alaska',
          ...
          'WI' => 'Wisconsin',
          'WY' => 'Wyoming',
      ],
      'iso_code' => 'US',
  ]
];

 

Germany

$format = [
  91 => [
    'format' => '%name\n%first_line\n%second_line\n%zip %city\n%country_name',
    'required_fields' => [
        'name',
        'first_line',
        'city',
        'zip',
    ],
    'uppercase_fields' => [
        'city',
    ],
    'name' => 'GERMANY',
    'locality_type' => 'city',
    'postal_code_type' => 'postal',
    'postal_code_pattern' => '\\d{5}',
    'input_format' => '%name\\n%first_line\\n%second_line\\n%zip\\n%city\\n%country_name',
    'iso_code' => 'DE',
  ],
];
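
One small example of how this data gets used: the postal_code_pattern field can drive validation of what the member types in. Here is a minimal sketch (a Python sketch rather than our actual PHP, with the patterns copied from the format data above; the helper itself is illustrative only):

import re

# postal_code_pattern values taken from the format data above
POSTAL_CODE_PATTERNS = {
    "US": r"(\d{5})(?:[ \-](\d{4}))?",  # 12345 or 12345-6789
    "DE": r"\d{5}",                     # five digits, e.g. 12435
}

def is_valid_postal_code(iso_code, postal_code):
    # Validate a postal code against the country's pattern (full match only).
    pattern = POSTAL_CODE_PATTERNS.get(iso_code)
    if pattern is None:
        return True  # no pattern on file: accept anything rather than block checkout
    return re.fullmatch(pattern, postal_code) is not None

print(is_valid_postal_code("DE", "12435"))       # True
print(is_valid_postal_code("US", "12345-6789"))  # True
print(is_valid_postal_code("DE", "Berlin"))      # False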

 

So, now we had all this great information on what addresses were supposed to look like for almost 200 countries. How did we take this data and turn it into a localized address experience?

 

Building a Localized Address Experience

A complete localized address experience requires two components: address input and address display. In other words, our members need to be able to add and edit their addresses using a form that makes sense to them, and they need to see their address displayed in a format that they’re familiar with.

Address Input

You’ve already seen what our unlocalized address form looked like, but here’s a quick reminder of what German members were seeing when they were entering their addresses.

 

This was a static form, meaning we had a big template with a bunch of <input> tags, and a little bit of JavaScript to handle interactions with the form. For a few select countries, like Canada and Australia, we added conditional statements to the template, swapping in different state or province lists as necessary. It made for a pretty messy template.

When deciding how we wanted to handle address forms, we knew that we didn’t want to have a template with enough conditional statements to handle hundreds of different address formats. Instead, we decided on a compositional approach.

Every address form starts with a country <select> input. This prompts the member to select their country first, so we can render the localized form before they start entering their address. We identified all the possible fields that could be in an address form: first_line, second_line, city, state, zip, and country, and recognized that all these fields could be rendered using just a few generic templates. These templates would allow us to specify custom labels, indicate whether or not the field was required, display validation errors, and render other custom content by providing different data when we render the template for each field.   

Text Input

A pretty basic text input can be used for the first line, second line, city, and zip address fields, as well as the state field, depending on the country. Here’s what our text input template looks like:

 

State Select Input

For the countries for which we have state (aka administrative area) data, we created a select input template:

With these templates, and the appropriate address formatting data, we can generate address input forms for almost 200 countries.

 

Address Display

Displaying localized addresses was also handled by a static template before this project. It was based on the US address format, and was written with the assumption that all addresses had the same fields as US addresses. It looked something like this:

<p>{{name}}</p>
<p>{{first_line}}</p>
<p>{{second_line}}</p>
<p>{{city}}, {{state}} {{zip}}</p>
<p>{{country_name}}</p>

While this wasn’t as problematic as the way we were handling address input, it was still not ideal. Addresses for international members would be displayed incorrectly, causing varying levels of confusion.

For German members, the difference wasn’t too bad:

 

But for members in countries like Japan, the difference was pretty significant:

 

To localize address display, we went with a compositional approach again, treating each field as a separate piece and then combining the pieces in the order, and with the delimiters, indicated by the format string.

<span class="name">Ets Y. Crafter</span><br/>
<span class="first-line">123 Main Street</span><br/>
<span class="zip">12435</span> <span class="city">BERLIN</span><br/>
<span class="country-name">Germany</span><br/>
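
To make that concrete, here is a rough sketch (Python rather than our PHP, with illustrative names) of how the format string from the data above can drive plaintext rendering: split on the newline tokens, substitute each %field that has a value, and drop lines that end up empty.

import re

GERMANY_FORMAT = r"%name\n%first_line\n%second_line\n%zip %city\n%country_name"

def render_address(format_string, address, uppercase_fields=()):
    # Render an address dict as plaintext lines, following the country's format string.
    lines = []
    for line_format in format_string.split(r"\n"):
        def substitute(match):
            field = match.group(1)
            value = address.get(field, "")
            return value.upper() if field in uppercase_fields else value
        line = re.sub(r"%(\w+)", substitute, line_format).strip()
        if line:  # skip lines whose fields are empty, e.g. a missing second_line
            lines.append(line)
    return "\n".join(lines)

address = {
    "name": "Ets Y. Crafter",
    "first_line": "123 Main Street",
    "zip": "12435",
    "city": "Berlin",
    "country_name": "Germany",
}
print(render_address(GERMANY_FORMAT, address, uppercase_fields=("city",)))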

We further enhanced our address display library by creating a PHP class that could render a localized address in plaintext, or fully customizable HTML, to support the numerous ways addresses are displayed throughout Etsy and our internal tools.

Conclusion

No more confusing address forms! While we’re nowhere near finished with localized addresses, we’ve made really great progress so far. We’re hopeful that our members will enjoy their experience just a little bit more now that they have fewer concerns when it comes to addresses. There is a lot more that we learned from this project (like how we replaced the unlocalized address forms with the localized address form on the entire site!), so keep an eye out for future blog posts. Thanks for reading!


Modeling User Journeys via Semantic Embeddings

Posted on July 12, 2018

Etsy is a global marketplace for unique goods. This means that as soon as an item becomes popular, it runs the risk of selling out. Machine learning solutions that simply memorize the popular items are not as effective, and crafting features that generalize well across items in our inventory is important. In addition, some content features such as titles are sometimes not as informative for us since these are seller provided, and can be noisy.

In this blog post, I will cover a machine learning technique we are using at Etsy that allows us to extract meaning from our data without the use of content features like titles, modeling only the user journeys across the site. This post assumes understanding of machine learning concepts,  specifically word2vec.

What are embeddings?

Word2vec is a popular method in natural language processing for learning word representations from an unlabelled body of text in order to discover similarity across words in a corpus. This is done by relating the co-occurrence of words, and relies on the assumption that words that appear together are more related than words that are far apart.

This same method can be used to model user interactions on Etsy by modeling user journeys in aggregate as sequences of user actions. Each user session is analogous to a sentence, and each user action (clicking on an item, visiting a shop’s home page, issuing a search query) is analogous to a word in NLP word2vec parlance. This method of modeling interactions allows us to represent items or other entities (shops, locations, users, queries) as low dimensional continuous vectors (semantic embeddings), where the similarity across two different vectors represents their co-relatedness. This method can be used without knowing anything about any particular user.

Semantic embeddings are agnostic to the content of items such as their titles, tags, descriptions, and allow us to leverage aggregate user interactions on the site to extract items that are semantically similar. In addition, they give us the ability to embed our search queries, items, shops, categories, and locations in the same vector space. This leads to better featurization and candidate selection across multiple machine learning problems, and provides compression, which drastically improves inference speeds compared to representing them as one-hot encodings. Modeling user journeys as a sequence of actions gives us information that is different from content-based methods that leverage descriptions and titles of items, and so these methods can be used in conjunction.

We have already productionized the use of these embeddings across product recommendations and guided search experiences, and they show great promise for our ranking algorithms as well. External to Etsy, similar semantic embeddings have been used to successfully learn representations for delivering ads as product recommendations via email and matching relevant ads to queries at Yahoo; and to improve search ranking and derive similar listings for recommendations at AirBnB.

Approach

Etsy has over 50 million active items listed on the site from over 2 million sellers, and tens of millions of unique search queries each month. This amounts to billions of tokens (items or user actions, the equivalent of words in NLP word2vec) for training. We were able to train embeddings on a single box, but we quickly ran into some limitations when modeling a sequence of user interactions as a naive word2vec model. The output embeddings we constructed did not give us satisfactory performance. This gave us further assurance that some extensions to the standard word2vec implementation were necessary, and so we extended the model with additional signals that are discussed below.

Skip-gram model and extensions

We initially started training the embeddings as a Skip-gram model with negative sampling (NEG, as outlined in the original word2vec paper). The Skip-gram model performs better than the Continuous Bag Of Words (CBOW) model for larger vocabularies. It models the context given a target token and attempts to maximize the average likelihood of seeing any of the context tokens given a target token. Negative sampling draws a negative token from the entire corpus with a frequency that is directly proportional to the frequency of the token appearing in the corpus.
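
As a baseline, before any of the extensions described below, this plain Skip-gram/NEG setup can be reproduced with an off-the-shelf library. The sketch below (using gensim, shown for illustration; it is not necessarily the tooling we use internally) treats each session's ordered sequence of tokens (listing ids, shop ids, search queries) as a "sentence":

from gensim.models import Word2Vec

# Each "sentence" is one user session: an ordered list of tokens such as
# listing ids, shop ids, and search queries (illustrative data, not real sessions).
sessions = [
    ["query:wrist watch", "listing:1234", "listing:5678", "shop:42", "listing:5678"],
    ["query:montre", "listing:5678", "listing:9012"],
    ["query:bohemian dress", "listing:3456", "shop:77"],
]

model = Word2Vec(
    sentences=sessions,
    vector_size=100,  # dimensionality of the embeddings
    window=5,         # context window within a session
    sg=1,             # Skip-gram rather than CBOW
    negative=5,       # negative sampling (NEG)
    min_count=1,      # in practice, thresholded per token type (see Training below)
    workers=4,
)

# Tokens that co-occur in similar journeys end up close together in the vector space.
print(model.wv.most_similar("listing:5678", topn=3))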

Training a Skip-gram model on only randomly selected negatives, however, ignores implicit contextual signals that we have found to be indicative of user preference in other contexts. For example, if a user clicks on the second item for a search query, the user most likely saw, but did not like, the first item that showed up in the search results. We extend the Skip-gram loss function by appending these implicit negative signals to the Skip-gram loss directly.

Similarly, we consider the purchased item in a particular session to be a global contextual token that applies to the entire sequence of user interactions. The intuition behind this is that there are many touch points on the user’s journey that help them come to the final purchase decision, and so we want to share the purchase intent across all the different actions that they took. This is also referred to as the linear multi-touch attribution model.

In addition, we want to be able to give a user journey that ended in a purchase more importance in the model. We define an importance weight per user interaction (click, dwell, add to cart, and purchase) and incorporate this to our loss function as well.

The details of how we extended Skip-gram are outside the scope of this post but can be found in detail in the Scalable Semantic Matching paper.

Training

We aim to learn a vector representation for each unique token, where a token can be listing id, shop id, query, category, or anything else that is part of a user’s  interaction. We were able to train embeddings up to 100 dimensions on a single box. Our final models take in billions of tokens and are able to produce embeddings for tens of millions of unique tokens.

User actions can be broadly defined as any sort of explicit or implicit engagement of the user with the product. We extract user interactions from multiple sources such as the search, category, market, and shop home pages, where these interactions are aggregated and not tied to a particular user.

The model performed significantly better when we thresholded tokens based on their type. For example, the frequency count and distribution for queries tend to be very different from those of items or shops. User queries are unbounded, have a very long tail, and are orders of magnitude more numerous than shops. So we want to capture all the shops in the embedding vector space, whereas we limit queries or items based on a cutoff.

We also found a significant improvement in performance by training the model on the past year’s data for the current and upcoming month to add some forecasting capabilities. For example, for a model serving production in the month of December, the previous December’s and January’s data were added, so our model would see more interactions related to Christmas during this time.

Training application specific models gave us better performance. For example, if we are interested in capturing shop level embeddings, training on the shops for an item instead of just the items directly yields better performance than averaging the embeddings for all items from a particular shop. We are actively experimenting with these models and plan to incorporate user and session specific data in the future.

Results

These are some interesting highlights of what the semantic embeddings are able to capture:

Note that all these relations are created without the model being fed any content features. These are results of the embeddings filtered to just search queries and projected onto TensorBoard.

This first set of query similarities captures many different animals for the query jaguar. The second set shows the model is also able to relate across different languages. Montre is watch in French, armbanduhr is wristwatch in German, horloge is clock in French, orologio da polso is wristwatch in Italian, uhren is again watch in German, and relog is watch in Spanish.


Estate pipe signifies tobacco pipes that are previously owned. Here, we find the model able to identify different materials the pipe can be made from (briar, corn cob, meerschaum) and different brands of manufacturers (Dunhill and Peterson), and it identifies accessories that are relevant to this particular type of pipe (pipe tamper) while not showing correlation with glass pipes, which are not valid in this context. Content-based methods have not been very effective in dealing with this. The embeddings are also able to capture different styles, with boho, gypsy, hippie, and gypsysoul all being related styles to bohemian.

 

We found semantic embeddings to also provide better similar items to a particular item compared to a candidate set generation model that is based on content. This example comes from a model we released recently to generate similar items across shops.

For an item that is a cookie with a steer-and-cactus design, we see the previous method latch onto content from the term ‘steer’ and ignore ‘cactus’, whereas the semantic embeddings place significance on cookies. We find that this has the advantage of not having to guess the importance of a particular item, and we can just rely on user engagement to guide us.

These candidates are generated based on a k-nn search across the semantic representations of items. We were able to run state of the art recall algorithms, unconstrained by memory on our training boxes themselves.
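
A brute-force version of that k-nn lookup is just a cosine-similarity search over the item embeddings. The sketch below (NumPy, illustrative only; at our scale you would reach for an approximate nearest-neighbor index) shows the idea:

import numpy as np

rng = np.random.default_rng(0)
item_ids = ["listing:{}".format(i) for i in range(10000)]
vectors = rng.normal(size=(len(item_ids), 100)).astype(np.float32)

# Normalize once so that a dot product equals cosine similarity.
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

def k_nearest(query_vector, k=5):
    # Return the k most similar items to the query embedding.
    query = query_vector / np.linalg.norm(query_vector)
    scores = vectors @ query
    top = np.argpartition(-scores, k)[:k]
    top = top[np.argsort(-scores[top])]
    return [(item_ids[i], float(scores[i])) for i in top]

print(k_nearest(vectors[123]))  # the item itself comes back first with similarity ~1.0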

We are excited about the variety of different applications of this model ranging from personalization to ranking to candidate set selection. Stay tuned!

This work is a collaboration between Xiaoting Zhao and Nishan Subedi from the Search Ranking team. We would like to thank our manager, Liangjie Hong, for insightful discussions and support, the Recommendation Systems and Search Ranking teams for their input during the project, especially Raphael Louca and Adam Henderson for launching products based on these models, Stan Rozenraukh, Allison McKnight and Mohit Nayyar for reviewing this post, and Mihajlo Grbovic, leading author of the semantic embeddings paper, for detailed responses to our questions.


Deploying to Google Kubernetes Engine

Posted on June 5, 2018

Late last year, Etsy announced that we’ll be migrating our services out of self-managed data centers and into the cloud. We selected Google Cloud Platform (GCP) as our cloud provider and have been working diligently to migrate our services. Safely and securely migrating services to the cloud requires them to live in two places at once (on-premises and in the cloud) for some period of time.

In this article, I’ll describe our strategy specifically for deploying to a pair of Kubernetes clusters: one running in the Google Kubernetes Engine (GKE) and the other on-premises in our data center. We’ll see how Etsy uses Jenkins to do secure Kubernetes deploys using authentication tokens and GCP service accounts. We’ll learn about the challenge of granting fine-grained GKE access to your service accounts and how Etsy solves this problem using Terraform and Helm.

Deploying to On-Premises Kubernetes

Etsy, while new to the Google Cloud Platform, is no stranger to Kubernetes. We have been running our own Kubernetes cluster inside our data center for well over a year now, so we already have a partial solution for deploying to GKE, given that we have a system for deploying to our on-premises Kubernetes.

Our existing deployment system is quite simple from the perspective of the developer currently trying to deploy: simply open up Deployinator and press a series of buttons! Each button is labeled with its associated deploy action, such as “build and test” or “deploy to staging environment.”

Under the hood, each button is performing some action, such as calling out to a bash script or kicking off a Jenkins integration test, or some combination of several such actions.

For example, the Kubernetes portion of a Search deploy calls out to a Jenkins pipeline, which subsequently calls out to a bash script to perform a series of “docker build”, “docker tag”, “docker push”, and “kubectl apply” steps.

Why Jenkins, then? Couldn’t we perform the docker/kubectl actions directly from Deployinator?

The key is in… the keys! In order to deploy to our on-premises Kubernetes cluster, we need a secret access token. We load the token into Jenkins as a “credential” such that it is stored securely (not visible to Jenkins users), but we can easily access it from inside Jenkins code.

Now, deploying to Kubernetes is a simple matter of looking up our secret token via Jenkins credentials and overriding the “kubectl” command to always use the token.

Our Jenkinsfile for deploying search services looks something like this:

All of the deploy.sh scripts above use environment variable $KUBECTL in place of standard calls to kubectl, and so by wrapping everything in our withKubernetesEnvs closure, we have ensured that all kubectl actions are using our secret token to authenticate with Kubernetes.

Declarative Infrastructure via Terraform

Deploying to GKE is a little different than deploying to our on-premises Kubernetes cluster and one of the major reasons is our requirement that everything in GCP be provisioned via Terraform. We want to be able to declare each GCP project and all its resources in one place so that it is automatable and reproducible. We want it to be easy—almost trivial—to recreate our entire GCP setup again from scratch. Terraform allows us to do just that.

We use Terraform to declare every possible aspect of our GCP infrastructure. Keyword: possible. While Terraform can create our GKE clusters for us, it cannot (currently) create certain types of resources inside of those clusters. This includes Kubernetes resources which might be considered fundamental parts of the cluster’s infrastructure, such as roles and rolebindings.

Access Control via Service Accounts

Among the objects that are currently Terraformable: GCP service accounts! A service account is a special type of Google account which can be granted permissions like any other user, but is not mapped to an actual user. We typically use these “robot accounts” to grant permissions to a service so that it doesn’t have to run as any particular user (or as root!).

At Etsy, we already have “robot deployer” user accounts for building and deploying services to our data center. Now we need a GCP service account which can act in the same capacity.

Unfortunately, GCP service accounts only (currently) provide us with the ability to grant complete read/write access to all GKE clusters within the same project. We’d like to avoid that! We want to grant our deployer only the permissions that it needs to perform the deploy to a single cluster. For example, a deployer doesn’t need the ability to delete Kubernetes services—only to create or update them.

Kubernetes provides the ability to grant more fine-grained permissions via role-based access control (RBAC). But how do we grant that kind of permission to a GCP service account?

We start by giving the service account very minimal read-only access to the cluster. The service account section of the Terraform configuration for the search cluster looks like this:

We have now created a service account with read-only access to the GKE cluster. Now how do we associate it with the more advanced RBAC inside GKE? We need some way to grant additional permissions to our deployer by using a RoleBinding to associate the service account with a specific Role or ClusterRole.

Solving RBAC with Helm

While Terraform can’t (yet) create the RBAC Kubernetes objects inside our GKE cluster, it can be configured to call a script (either locally or remotely) after a resource is created.

Problem solved! We can have Terraform create our GKE cluster and the minimal deployer service account, then simply call a bash script which creates all the Namespaces, ClusterRoles, and RoleBindings we need inside that cluster. We can bind a role using the service account’s email address, thus mapping the service account to the desired GKE RBAC role.

However, as Etsy has multiple GKE clusters which all require very similar sets of objects to be created, I think we can do better. In particular, each cluster will require service accounts with various types of roles, such as “cluster admin” or “deployer”. If we want to add or remove a permission from the deployer accounts across all clusters, we’d prefer to do so by making the change in one place, rather than modifying multiple scripts for each cluster.

Good news: there is already a powerful open source tool for templating Kubernetes objects! Helm is a project designed to manage configured packages of Kubernetes resources called “charts”.

We created a Helm chart and wrote templates for all of the common resources that we need inside GKE. For each GKE cluster, we have a yaml file which declares the specific configuration for that cluster using the Helm chart’s templates.

For example, here is the yaml configuration file for our production search cluster:

And here are the templates for some of the resources used by the search cluster, as declared in the yaml file above (or by nested references inside other templates)…

When we are ready to apply a change to the Helm chart—or Terraform is applying the chart to an updated GKE cluster—the script which applies the configuration to the GKE cluster does a simple “helm upgrade” to apply the new template values (and only the new values: Helm won’t touch anything where it detects that no changes are needed).

Integrating our New System into the Pipeline

Now that we have created a service account which has exactly the permissions we require to deploy to GKE, we only have to make a few simple changes to our Jenkinsfile in order to put our new system to use.

Recall that we had previously wrapped all our on-premises Kubernetes deployment scripts in a closure which ensured that all kubectl commands use our on-premises cluster token. For GKE, we use the same closure-wrapping style, but instead of overriding kubectl to use a token, we give it a special kube config which has been authenticated with the GKE cluster using our new deployer service account. As with our secret on-premises cluster token, we can store our GCP service account key in Jenkins as a credential and then access it using Jenkins’ withCredentials function.

Here is our modified Jenkinsfile for deploying search services:

And there you have it, folks! A Jenkins deployment pipeline which can simultaneously deploy services to our on-premises Kubernetes cluster and to our new GKE cluster by associating a GCP service account with GKE RBAC roles.

Migrating a service from on-premises Kubernetes to GKE is now (in simple cases) as easy as shuffling a few lines in the Jenkinsfile. Typically we would deploy the service to both clusters for a period of time and send a percentage of traffic to the new GKE version of the service under an A/B test. After concluding that the new service is good and stable, we can stop deploying it on-premises, although it’s trivial to switch back in an emergency.

Best of all: absolutely nothing has changed from the perspective of the average developer looking to deploy their code. The new logic for deploying to GKE remains hidden behind the Deployinator UI, and developers press the same series of buttons as always.

Thanks to Ben Burry, Jim Gedarovich, and Mike Adler who formulated and developed the Helm-RBAC solution with me.


The EventHorizon Saga

Posted by on May 29, 2018 / No Responses

This is an epic tale of EventHorizon, and how we finally got it to a happy place.

EventHorizon is a tool we use to watch events streaming into our system. Events (also known as beacons) are basically clickstream data—a record of actions visitors take on our site, what content they saw, what experiments they were bucketed into, etc.

Events are sent primarily from our web & API servers (backend events) and web browsers (frontend events), and logged in Kafka. EventHorizon is primarily used as a debugging tool, to make sure that a new event you’ve added is working as expected, but also serves to monitor the health of our event system.

Screenshot of the EventHorizon web page, showing events streaming in

EventHorizon UI

EventHorizon is pretty simple; it’s only around 200 lines of Go code. It consumes messages from our main event topic on Kafka (“beacon-main”) and forwards them via WebSockets to any connected browsers. Ideally, the time between when an event is fired on the web site and when it appears in the EventHorizon UI is on the order of milliseconds.

EventHorizon has been around for years, and in the early days, everything was hunky-dory. But then, the lagging started.

Nobody Likes a Laggard

As with most services at Etsy, we have lots of metrics we monitor for EventHorizon. One of the critical ones is consumer lag, which is the age of the last message we’ve read from the beacon-main Kafka topic. This would normally be milliseconds, but occasionally it would start lagging into minutes or even hours.

Graph showing EventHorizon consumer lag increasing from zero to over 1.7 hours, then back down again

EventHorizon Consumer Lag

Sometimes it would recover on its own; if not, restarting EventHorizon would often fix the problem, but only temporarily. Within anywhere from a few hours to a few weeks, we’d notice lag time beginning to grow again.

We spent a lot of time poring over EventHorizon’s code, thinking we had a bug somewhere. It makes use of Go’s channels—perhaps there was a subtle concurrency bug that was causing a deadlock? We fiddled with that, but it didn’t help.

We noticed that we could sometimes trigger the lag if two users connected to EventHorizon at the same time. This clue led us to think that there was a bug somewhere in the code that sent events to the browsers. Something with Websockets? We considered rewriting it to use Server-sent Events, but never got around to that.

We also wondered if the sheer quantity of events we were sending to browsers was causing the problem. We updated EventHorizon to only send events from Etsy employees to browsers in production. Alas, the lag didn’t go away—although it seemed to have gotten a little better.

We eventually moved onto other things. We set up a Nagios alert for when EventHorizon started lagging, and just restarted it when it got bad. Since it would often be fine for 2-3 weeks before lagging, spending more time trying to fix it wasn’t a top priority.

Orloj Joins the Fray

In September 2017 EventHorizon lag had gotten really bad. We would restart it and it would just start lagging again immediately. At some point we even turned off the Nagios alert.

However, another system, Orloj (pronounced “OR-loy”, named after the Prague astronomical clock), had started experiencing lag as well. Orloj is another Kafka consumer, responsible for updating the Recently Viewed Listings that are shown on the homepage when you are signed in. As Orloj is a production system, figuring out what was happening became much more urgent.

Orloj’s lag was a little different: lag would spike once an hour, whenever the Hadoop job that pulls beacons down from Kafka into HDFS ran, and at certain times of the day it would be quite significant.

Graph showing the Orloj service lagging periodically

Orloj Periodic Lag

It turned out that due to a misconfiguration, KafkaPullJob, which was only supposed to launch one mapper per Kafka partition (of which we have 144 for beacon-main), was actually launching 400 mappers, which was swamping the network. We fixed this, and Orloj was happy again.

For about a week.

Trouble with NICs

Orloj continued to have issues with lag. While digging into this, I realized that the machines in the Kafka clusters only had 1G network interfaces (NICs), whereas 10G NICs were standard in most of our infrastructure. I talked to our networking operations team to ask about upgrading the cluster and one of the engineers asked what was going on with one particular machine, kafkautil01. The network graph showed that its bandwidth was pegged at 100%, and had been for a while. kafkautil01 also had a 1G NIC. And that’s where EventHorizon ran.

A light bulb exploded over my head.

When I relayed this info to Kevin Gessner, the engineer who wrote Orloj, he said “Oh yeah, consuming beacon-main requires at least 1.5 Gbps.” Suddenly it all made sense.

Beacon traffic fluctuates in proportion to Etsy site traffic, which is cyclical. Parts of the day were under 1 Gbps, parts over, and when it went over, EventHorizon couldn’t keep up and would start lagging. And we were going over more and more often as Etsy grew.

And remember the bit about two browsers connecting at once triggering lag? With EventHorizon forwarding the firehose of events to each browser, that was also a good way to push the network bandwidth over 1 Gbps, triggering lag.

We upgraded the Kafka clusters and the KafkaUtil boxes to 10G NICs and everything was fixed. No more lag!

Ha, just kidding.

Exploding Events

We did think it was fixed for a while, but EventHorizon and Orloj would both occasionally lag a bit, and it seemed to be happening more frequently.

While digging into the continuing lag, we discovered that the size of events had grown considerably. Looking at a graph of event sizes, there was a noticeable uptick around the end of August.

Graph showing the average size of event beacons increasing from around 5K to 7K in the period of a couple months

Event Beacon Size Increase

This tied into problems we were having with capacity in our Hadoop cluster—larger events mean longer processing times for nearly every job.

Inspecting event sizes showed some clear standouts. Four search events were responsible for a significant portion of all event bandwidth. The events were on the order of 50KB each, about 10x the size of a “normal” event. The culprit was some debugging information that had been added to the events.

The problem was compounded by something that has been part of our event pipeline since the beginning: we generate complementary frontend events for each backend primary event (a “primary event” is akin to a page view) to capture browser-specific data that is only available on the frontend, and we do it by first making a copy of the entire event and then adding the frontend attributes. Later, when we added events for tracking page performance metrics, we did the same thing. These complementary events don’t need all the custom attributes of the original, so copying them wholesale wasted a lot of bandwidth. We stopped doing that.

Between slimming down the search events, not copying attributes unnecessarily, and finding a few more events that could be trimmed down, we managed to bring down the average event size, as well as event volume, considerably.

Graph showing the event beacon message size dropping from 7K to 3.5K in the space of a week

Event Beacon Size Decrease

Nevertheless, the lag persisted.

The Mysterious Partition 20

Orloj was still having problems, but this time it was a little different. The lag seemed to be happening only on a single partition, 20. We looked to see if the broker that was the leader for that partition was having any problems, and couldn’t see anything. We did see that it was serving a bit more traffic than other brokers, though.

The first thing that came to mind was a hot key. Beacons are partitioned by a randomly-generated string called a “browser_id” that is unique to a client (browser, native device, etc.) hitting our site. If there’s no browser_id, as is the case with internal API calls, it gets assigned to a random partition.

I used a command-line Kafka consumer to try to diagnose. It has an option for only reading from a single partition. Here I sampled 100,000 events from partitions 20 and 19:

Partition 20

$ go run cmds/consumer/consumer.go -ini-files config.ini,config-prod.ini -topic beacon-main -partition 20 -value-only -max 100000 | jq -r '[.browser_id[0:6],.user_agent] | @tsv' | sort | uniq -c | sort -rn | head -5
    558 orcIq5  Dalvik/2.1.0 (Linux; U; Android 7.0; SAMSUNG-SM-G935A Build/NRD90M) Mobile/1 EtsyInc/4.77.0 Android/1
    540 null    Api_Client_V3/Bespoke_Member_Neu_Orchestrator
    400 ArDkKf  Dalvik/2.1.0 (Linux; U; Android 8.0.0; Pixel XL Build/OPR3.170623.008) Mobile/1 EtsyInc/4.78.1 Android/1
    367 hK8GHc  Dalvik/2.1.0 (Linux; U; Android 7.0; SM-G950U Build/NRD90M) Mobile/1 EtsyInc/4.75.0 Android/1
    366 EYuogd  Dalvik/2.1.0 (Linux; U; Android 7.0; SM-G930V Build/NRD90M) Mobile/1 EtsyInc/4.77.0 Android/1

Partition 19

$ go run cmds/consumer/consumer.go -ini-files config.ini,config-prod.ini -topic beacon-main -partition 19 -value-only -max 100000 | jq -r '[.browser_id[0:6],.user_agent] | @tsv' | sort | uniq -c | sort -rn | head -5
    570 null    Api_Client_V3/Bespoke_Member_Neu_Orchestrator
    506 SkHj7N  Dalvik/2.1.0 (Linux; U; Android 7.0; LG-LS993 Build/NRD90U) Mobile/1 EtsyInc/4.78.1 Android/1
    421 Jc36zw  Dalvik/2.1.0 (Linux; U; Android 7.0; SM-G930V Build/NRD90M) Mobile/1 EtsyInc/4.78.1 Android/1
    390 A586SI  Dalvik/2.1.0 (Linux; U; Android 8.0.0; Pixel Build/OPR3.170623.008) Mobile/1 EtsyInc/4.78.1 Android/1
    385 _rD1Uj  Dalvik/2.1.0 (Linux; U; Android 7.0; SM-G935P Build/NRD90M) Mobile/1 EtsyInc/4.77.0 Android/1

I couldn’t see any pattern, but did notice we were getting a lot of events from the API with a null browser_id. These appeared to be distributed evenly across partitions, though.

We were seeing odd drops and spikes in the number of events going to partition 20, so I thought I’d see if I could just dump events around that time, and I started digging into our beacon consumer command-line tool to do that. In this process, I came across the big discovery: the -partition flag I had been relying on wasn’t actually hooked up to anything. So I was never consuming from a particular partition, but from all partitions. Once I fixed this, the problem was obvious:

Partition 20

$ go run cmds/consumer/consumer.go -ini-files config.ini,config-prod.ini -topic beacon-main -q -value-only -max 10000 -partition 20 | jq -r '[.browser_id[0:6],.user_agent] | @tsv' | sort | uniq -c | sort -nr | head -5
   8268 null    Api_Client_V3/Bespoke_Member_Neu_Orchestrator
    335 null    Api_Client_V3/BespokeEtsyApps_Public_Listings_Offerings_FindByVariations
    137 B70AD9  Mozilla/5.0 (iPhone; CPU iPhone OS 11_1_1 like Mac OS X) AppleWebKit/604.3.5 (KHTML, like Gecko) Mobile/15B150 EtsyInc/4.78 rv:47800.37.0
     95 C23BB0  Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_3 like Mac OS X) AppleWebKit/603.3.8 (KHTML, like Gecko) Mobile/14G60 EtsyInc/4.78 rv:47800.37.0
     83 null    Api_Client_V3/Member_Carts_ApplyBestPromotions

and another partition for comparison:

$ go run cmds/consumer/consumer.go -ini-files config.ini,config-prod.ini -topic beacon-main -q -value-only -max 10000 -partition 0 | jq -r '[.browser_id[0:6],.user_agent] | @tsv' | sort | uniq -c | sort -nr | head -5
   1074 dtdTyz  Dalvik/2.1.0 (Linux; U; Android 7.0; VS987 Build/NRD90U) Mobile/1 EtsyInc/4.78.1 Android/1
    858 gFXUcb  Dalvik/2.1.0 (Linux; U; Android 7.0; XT1585 Build/NCK25.118-10.2) Mobile/1 EtsyInc/4.78.1 Android/1
    281 C380E3  Mozilla/5.0 (iPhone; CPU iPhone OS 11_0_3 like Mac OS X) AppleWebKit/604.1.38 (KHTML, like Gecko) Mobile/15A432 EtsyInc/4.77 rv:47700.64.0
    245 E0464A  Mozilla/5.0 (iPhone; CPU iPhone OS 11_0_3 like Mac OS X) AppleWebKit/604.1.38 (KHTML, like Gecko) Mobile/15A432 EtsyInc/4.78 rv:47800.37.0
    235 BAA599  Mozilla/5.0 (iPhone; CPU iPhone OS 11_0_3 like Mac OS X) AppleWebKit/604.1.38 (KHTML, like Gecko) Mobile/15A432 EtsyInc/4.78 rv:47800.37.0

All the null browser_ids were going to partition 20. But how could this be? They’re supposed to be random.

I bet a number of you are slapping your foreheads right now, just like I did. I wrote this test of the hashing algorithm: https://play.golang.org/p/MJpmoPATvO
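A rough Kotlin sketch of the same idea is below. The partitioner here is a stand-in, not Kafka’s actual murmur2-based one; the point it illustrates is that the literal string “null” is a perfectly valid key that always hashes to the same partition, whereas a genuinely missing key would have been assigned randomly.

import kotlin.random.Random

// Stand-in partitioner, just to illustrate the sticky-key behavior.
fun partitionFor(browserId: String?, numPartitions: Int = 144): Int =
    if (browserId == null) Random.nextInt(numPartitions)              // truly absent key: random partition
    else (browserId.hashCode() and Int.MAX_VALUE) % numPartitions     // any string, including "null", always maps to one partition

fun main() {
    repeat(3) { println(partitionFor("null")) }  // same partition every time
    repeat(3) { println(partitionFor(null)) }    // spread across partitions
}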

Yes, the browser_ids I was thinking were null were actually the string “null”, which is what was getting sent in the JSON event. I put in a fix for this, and:

Graph showing the number of events per second being processed by Orloj by partition, with partition 20 dropping significantly

Orloj Partition 20 Baumgartner

Lessons

I’m not going to attempt to draw any deep conclusions from this sordid affair, but I’ll close with some advice for when you find yourself with a persistent confounding problem like this one.

Graph everything. Then add some more graphs. Think about what new metrics might be helpful in diagnosing your issue. Kevin added the above graph showing Orloj events/sec by partition, which was critical to realizing there was a hot key issue.

If something makes no sense, think about what assumptions you’ve been making in diagnosing it, and check those assumptions. Kevin’s graph didn’t line up with what I was seeing, so I dug deeper into the command-line consumer and found the problem with the -partition flag.

Talk to people. Almost every small victory along the way came after talking with someone about the problem, and getting some crucial insight and perspective I was missing.

Keep at it. As much as it can seem otherwise at times, computers are deterministic, and with persistence, smart colleagues, and maybe a bit of luck, you’ll figure it out.

Epilogue

On November 13 Cloudflare published a blog post on tuning garbage collection in Go. EventHorizon and Orloj both spent a considerable percentage of time (nearly 10%) doing garbage collection. By upping the GC threshold for both, we saw a massive performance improvement:

Graph showing the GC pause time per sec dropping from 75+ ms to 592 µs

Graph showing the message processing time dropping significantly

Graph showing the EventHorizon consumer lag dropping from 25-125ms to near zero

Except for a couple brief spikes during our last Kafka upgrade, EventHorizon hasn’t lagged for more than a second since that change, and the current average lag is 2 ms.

Thanks to Doug Hudson, Kevin Gessner, Patrick Cousins, Rebecca Sliter, Russ Taylor, and Sarah Marx for their feedback on this post. You can follow me on Twitter at @bgreenlee.


Sealed classes opened my mind

Posted by on April 12, 2018 / 3 Comments

How we use Kotlin to tame state at Etsy


Bubbly the Baby Seal by SmartappleCreations

Etsy’s apps have a lot of state.  Listings, tags, attributes, materials, variations, production partners, shipping profiles, payment methods, conversations, messages, snippets, lions, tigers, and even stuffed bears.  

All of these data points form a matrix of possibilities that we need to track.  If we get things wrong, then our buyers might fail to find a cat bed they wanted for Whiskers.  And that’s a world I don’t want to live in.

There are numerous techniques out there that help manage state.  Some are quite good, but none of them fundamentally changed our world quite like Kotlin’s sealed classes.  As a bonus, most any state management architecture can leverage the safety of Kotlin’s sealed classes.

What are sealed classes?

The simplest definition of sealed classes is that they are “like enums with actual types.”  Inspired by Scala’s sealed types, Kotlin’s official documentation describes them thus:

Sealed classes are used for representing restricted class hierarchies, when a value can have one of the types from a limited set, but cannot have any other type. They are, in a sense, an extension of enum classes: the set of values for an enum type is also restricted, but each enum constant exists only as a single instance, whereas a subclass of a sealed class can have multiple instances which can contain state.

A precise, if a bit wordy, definition.  Most importantly: they are restricted hierarchies, they represent a set, they are types, and they can contain values.  

Let’s say we have a class Result to represent the value returned from a network call to an API.  In our simplified example, we have two possibilities: a list of strings, or an error.
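The snippet isn’t reproduced here, so here is a minimal sketch of what such a hierarchy might look like (the property names are assumptions):

sealed class Result {
    data class Success(val items: List<String>) : Result()
    data class Error(val exception: Throwable) : Result()
}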

 

Now we can use Kotlin’s when expression to handle the result. The when expression is a bit like pattern matching. While not as powerful as pattern matching in some other languages, Kotlin’s when expression is one of the language’s most important features.  

Let’s try parsing the result based on whether it was a success or an error.
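A minimal sketch of that handling, with showItems stubbed out for illustration:

fun showItems(items: List<String>) = items.forEach(::println)

fun handle(result: Result) = when (result) {
    is Result.Success -> showItems(result.items)            // result is smartcast to Result.Success
    is Result.Error -> result.exception.printStackTrace()   // just log the error
}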

 

We’ve done a lot here in just a few lines, so let’s go over each aspect.  First we were able to check the type of result using Kotlin’s is operator, which is equivalent to Java’s instanceof operator.  By checking the type, Kotlin was able to smartcast the value of result for us for each case.  So if result is a success, we can access the value as if it is typed Result.Success.  Now we can pass items without any type casting to showItems(items: List<String>).  If the result was an error, we just print the stack trace to the console.

The next thing we did with the when expression is to exhaust all the possibilities for the Result sealed class type.  Typically a when expression must have an else clause.  However, in the above example, there are no other possible types for Result, so the compiler, and IDE, know we don’t need an else clause.  This is important as it is exactly the kind of safety we’ve been longing for.  Look at what happens when we add a Cancelled type to the sealed class:
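Sketching the expanded hierarchy:

sealed class Result {
    data class Success(val items: List<String>) : Result()
    data class Error(val exception: Throwable) : Result()
    object Cancelled : Result()
}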

 

Now the IDE would show an error on our when statement because we haven’t handled the Cancelled branch of Result.
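For example, a when like the following (assigned to a val so that it is used as an expression, reusing the names from the earlier sketch) would now be flagged:

val handled = when (result) {
    is Result.Success -> showItems(result.items)
    is Result.Error -> result.exception.printStackTrace()
    // no branch for Result.Cancelled, so the compiler and IDE complain:
}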

‘when’ expression must be exhaustive, add necessary ‘is Cancelled’ branch or ‘else’ branch instead.

 

The IDE knows we didn’t cover all our bases here.  It even knows which possible types result could be based on the Result sealed class.  Helpfully, the IDE offers us a quickfix to add the missing branches.  

There is one subtle difference here though.  Notice we assigned the when expression to a val. The requirement that a when expression must be exhaustive only kicks in if you use when as a value, a return type, or call a function on it.  This is important to watch out for as it can be a bit of a gotcha.  If you want to utilize the power of sealed classes and when, be sure to use the when expression in some way.

So that’s the basic power of sealed classes, but let’s dig a little deeper and see what else they can do.  

Weaving state

Let’s take a new example of a sealed class Yarn.
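The original snippet isn’t shown, but it might look something like this. The post only names Wool explicitly, so the second subtype here (Acrylic) is purely illustrative:

sealed class Yarn
open class Wool : Yarn()
class Acrylic : Yarn()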

 

Then let’s create a new class called Merino and have it extend from Wool:
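Continuing the sketch (Wool was declared open above precisely so it can be extended):

class Merino : Wool()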

 

What would be the effect of this on our when expression if we were branching on Yarn?  Have we actually created a new possibility that we must have a branch to accommodate?  In this case we have not, because a Merino is still considered part of Wool.  There are still only two branches for Yarn:
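Roughly:

fun describe(yarn: Yarn) {
    when (yarn) {
        is Wool -> println("warm")         // a Merino still matches here
        is Acrylic -> println("washable")
    }
}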

 

So that didn’t really help us represent the hierarchy of wool properly.  But there is something else we can do instead. Let’s expand the example of Wool to include Merino and Alpaca types of wool and make Wool into a sealed class.
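Sketched out, the reworked hierarchy might look like this (Acrylic is still just an illustrative stand-in):

sealed class Yarn
sealed class Wool : Yarn() {
    class Merino : Wool()
    class Alpaca : Wool()
}
class Acrylic : Yarn()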

 

Now our when expression will see each Wool subclass as unique branches that must be exhausted.
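For example, using the when as an expression now forces every leaf type to be handled:

fun describe(yarn: Yarn): String = when (yarn) {
    is Wool.Merino -> "soft merino"
    is Wool.Alpaca -> "fluffy alpaca"
    is Acrylic -> "easy-care acrylic"
}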

 

Only wool socks please

As our state grows in complexity, however, there are times when we may only be concerned about a subset of that state.  In Android it is common to have a custom view that perhaps only cares about the loading state. What can we do about that? If we use the when expression on Yarn, don’t we need to handle all cases?  Luckily Kotlin’s stdlib comes to the rescue with a helpful extension function.

Say we are processing a sequence of events. Let’s create a fake sequence of all our possible states.
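Reusing the illustrative types above, that fake sequence might be:

val events: List<Yarn> = listOf(
    Wool.Merino(), Acrylic(), Wool.Alpaca(), Wool.Merino()
)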

 

Now let’s say that we’re getting ready to knit a really warm sweater. Maybe our KnitView is only interested in the Wool.Merino and Wool.Alpaca states. How do we handle only those branches in our when expression?  Kotlin’s Iterable extension function filterIsInstance to the rescue!  Thankfully we can do the filtering with a single line of code.
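Roughly:

val woolEvents: List<Wool> = events.filterIsInstance<Wool>()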

 

And now, like magic, our when expression only needs to handle Wool states.  So now if we want to iterate through the sequence we can simply write the following:
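A sketch of that loop:

woolEvents.forEach { wool ->
    when (wool) {
        is Wool.Merino -> println("knitting merino")
        is Wool.Alpaca -> println("knitting alpaca")
    }
}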

 

Meanwhile at Etsy

Like a lot of apps these days, we support letting our buyers and sellers log in via Facebook or Google in addition to email and password.  To help protect the security of our buyers and sellers, we also offer the option of Two Factor Authentication. Adding additional complexity, a lot of these steps have to pause and wait for user input.  Let’s look at the diagram of the flow:

So how do we model this with sealed classes?  There are 3 places where we make a call out to the network/API and await a result.  This is a logical place to start. We can model the responses as sealed classes.

First, we reach out to the server for the social sign in request itself:
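The actual states aren’t listed on this page, so the subclasses in this and the next two sketches are plausible guesses rather than Etsy’s real ones:

sealed class SocialSignInResult {
    class Success : SocialSignInResult()
    class NewAccountRequired : SocialSignInResult()
    class Error : SocialSignInResult()
}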

 

Next, is the request to actually sign in:
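Again, with assumed state names:

sealed class SignInResult {
    class Success : SignInResult()
    class TwoFactorRequired : SignInResult()
    class Error : SignInResult()
}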

 

Finally, we have the two factor authentication for those that have enabled it:
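And, with the same caveat:

sealed class TwoFactorResult {
    class Success : TwoFactorResult()
    class Error : TwoFactorResult()
}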

 

This is a good start but let’s look at what we have here.  The first thing we notice is that some of the states are the same.  This is when we must resist our urges and instincts to combine them.  While they look the same, they represent discrete states that a user can be in at different times. It’s safer, and more correct to think of each state as different.  We also want to prefer duplication over the wrong abstraction.

The next thing we notice here is that these states are actually all connected. In some ways we’re starting to approach a Finite State Machine (FSM).  While an FSM here might make sense, what we’ve really done is to define a very readable, safe way of modeling the state and that model could be used with any type of architecture we want. 

The above sealed classes are logical and match our 3 distinct API requests — ultimately, however, this is an imperfect representation.  In reality, there should be multiple instances of SignInResult for each branch of SocialSignInResult, and multiple instances of TwoFactorResult for each branch of SignInResult.

For us three steps proved to be the right level of abstraction for the refactoring effort we were undertaking at the time.  In the future we may very well connect each and every branch in a single sealed class hierarchy. Instead of three separate classes, we’d use a single class that represented the complete set of possible states.  

 

Let’s take a look at what that would have looked like:

Note: to keep things simple I omitted the parameters and used only regular subclasses.
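A hedged sketch of what that single hierarchy might look like, following the (assumed) states above:

sealed class SignInState {
    class SocialSignInSuccess : SignInState()
    class SocialSignInNewAccountRequired : SignInState()
    class SocialSignInError : SignInState()
    class SignInSuccess : SignInState()
    class SignInTwoFactorRequired : SignInState()
    class SignInError : SignInState()
    class TwoFactorSuccess : SignInState()
    class TwoFactorError : SignInState()
}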

 

Our sealed class is now beginning to look a bit more busy, but we have successfully modeled all the possible states for an Etsy buyer or seller during login.  And now we have a discrete set of types to represent each state, and a type safe way to filter them using Iterable.filterIsInstance<Type>(iterable).

This is pretty powerful stuff.  As you can imagine we could connect a custom view to a stream of events that filtered only on 2FA states.  Or we could connect a progress “spinner” that would hide itself on some other subset of the states.

These ways of representing state opened up a clean way to react with business logic or changes on the UI.  Moreover, we’ve done it in a type-safe, expressive, and fluent way.

So with a little luck, now you can login and get that cat bed for Whiskers!

Special thanks to Cameron Ketcham who wrote most of this code!


Culture of Quality: Measuring Code Coverage at Etsy

Posted by on February 15, 2018 / 5 Comments

In the summer of 2017, Etsy created the Test Engineering Strategy Team to define, measure, and develop practices that would help product engineering teams ship quality code. One of our first goals was to find a way to establish a quantifiable quality “baseline” and track our progress against this baseline over time. We settled on code coverage because it provides both a single percentage number as well as line-by-line detailed information for engineers to act on. With over 30,000 unit tests in our test suite, this became a difficult challenge.

Code coverage is the measure of how much code is being executed by your test suite. This blog post will walk you through the process of implementing coverage across our various platforms; we are not going to weigh in on the debate over the value of code coverage itself.

We approached measuring code coverage with 5 requirements:

  1. Generate code coverage metrics by file for PHP, Android, and iOS code bases
  2. Where possible, use existing tools to minimize impact on existing test and deploy infrastructure
  3. Create an automated and reliable hands-off process for measurement and reporting
  4. Surface an accurate measurement as a KPI for code owners
  5. Promote wide dissemination of the results to increase awareness

Setup and Setbacks

Our first step was collecting information on which tools were already being utilized in our deployment pipeline. A surprise discovery was that Android code coverage had already been implemented in our CI pipeline using JaCoCo, but had been disabled earlier in the year because it was impacting deploy times. We re-enabled the Jenkins job and set it to run on a weekly schedule instead of each deploy to the master trunk. Impact on deployment would be a recurring theme throughout coverage implementation.

There was some early concern about the accuracy of the coverage report generated by JaCoCo. We noticed discrepancies in the reports when viewed through the Jenkins JaCoCo plugin and the code coverage window in Android Studio. We already have a weekly “Testing Workshop” scheduled where these kinds of problems are discussed. Our solution was to sit down with engineers during the workshops and review the coverage reports for differences. A thorough review found no flaws in the reports, so we moved forward with relying on JaCoCo as our coverage generation tool.

For iOS, xcodebuild has built-in code coverage measurement. Setting up xcodebuild’s code coverage is a one-step process of enabling coverage in the target scheme using Xcode. This provides an immediate benefit to our engineers, as Xcode supplies instant feedback in the editor on which code is covered by tests. We measured the performance hit that our unit test build times would take from enabling coverage in CI and determined that it would be negligible. The jobs have a high variance build time with a mean(μ) swing of about 40 seconds. When jobs with coverage enabled were added to the sample, it did not have a noticeable effect on the build time trend. The trend is continually monitored and will be adjusted if the impact becomes detrimental to our deployment process.

Our PHP unit tests were being executed by PHPUnit, an industry standard test runner, which includes an option to run tests with coverage. This was a great start. Running the tests with coverage required us to enable Xdebug, which can severely affect the performance of PHP. The solution was to employ an existing set of containers created by our Test Deployment Infrastructure team for smaller tasks (updating dashboards, running build metric crons, etc.). By enabling Xdebug on this small subset of containers we could run coverage without affecting the entire deployment suite. After this setup was completed, we attempted to run our unit tests with coverage on a sample of tests. We quickly found that executing tests with coverage would utilize a lot of RAM, which would cause the jobs to fail. Our initial attempt to solve the problem was to modify the “memory_limit” directive in the PHP.ini file to grant more RAM to the process. We were allowing up to 1024MB of RAM for the process, but this was still unsuccessful. Our eventual solution was to prepend our shell execution with php -d "memory_limit=-1" to remove the memory limit so the process could use all available memory.

Even after giving the process all of the memory available to the container, we were still running into job failures. Checking coverage for ~30,000 tests in a single execution was too problematic. We needed a way to break up our coverage tests into multiple executions.

Our PHP unit tests are organized by directory (cart, checkout, etc.). In order to break up the coverage job, we wrote a simple shell script that iterates over each directory and creates a coverage report.

for dir in */ ; do
  if [[ -d "$dir" && ! -L "$dir" ]]; then
    Coverage Command $dir;
  fi;
done

Once we had the individual coverage reports, we needed a way to merge them. Thankfully, there is a second program called PHPCov that will combine reports generated by PHPUnit. A job of this size can take upwards of four hours in total, so checking coverage on every deploy was out of the question. We set the job to run on a cron alongside our Android and iOS coverage jobs, establishing a pattern of checking code coverage on Sunday nights.

Make It Accessible

After we started gathering code coverage metrics, we needed to find a way to share this data with the owners of the code. For PHP, the combined report generated by PHPCov is a hefty 100MB XML file in the Clover format. We use a script written in PHP to parse the XML into an array and then output that array as a CSV file. With that done, we copy the data to our Vertica data warehouse for easy consumption by our engineering teams.

For iOS, our first solution was to use a ruby gem called Slather to generate HTML reports. We followed this path for many weeks and implemented automated Slather reporting using Jenkins. Initially this seemed like a viable solution, but when reviewed by our engineers we found some discrepancies between the coverage report generated by Slather and the coverage report generated by Xcode. We had to go back to the drawing board and find another solution. We found another ruby gem, xcov, that directly analyzed the .xccoverage file generated by xcodebuild when running unit tests. We could parse the results of this coverage report into a JSON data object, then convert it into CSV and upload it to Vertica.

For Android, the JaCoCo gradle plugin is supposed to be able to generate reports in multiple formats, including CSV. However, we could not find a working solution for generating reports using the gradle plugin. We spent a considerable amount of time debugging this problem and eventually realized that we were yak shaving. We discarded base assumptions and looked for other solutions. Eventually we decided to use the built-in REST api provided by Jenkins. We created a downstream job, passed it the build number for the JaCoCo report, then used a simple wget command to retrieve and save the JSON response. This was converted into a CSV file and (once again) uploaded to Vertica.

Once we had the coverage data flowing into Vertica we wanted to get it into the hands of our engineering teams. We used Superbit, our in-house business intelligence reporting tool, to make template queries and dashboards that provided examples of how to surface relevant information for each team. We also began sending a weekly email newsletter to the engineering and project organizations highlighting notable changes to our coverage reports.

To be continued…

Measuring code coverage is just one of many methods the Test Engineering Strategy Team is using to improve the culture of quality in Etsy Engineering. In a future blog post we will be discussing the other ways in which we measure our code base and how we use this reporting to gain confidence in the code that we ship.


Selecting a Cloud Provider

Posted by on January 4, 2018 / 13 Comments

Etsy.com and most of our related services have been hosted in self-managed data centers since the first Etsy site was launched in 2005. Earlier this year, we decided to evaluate migrating everything to a cloud hosting solution. The decision to run our own hardware in data centers was the right decision at the time, but infrastructure as a service (IaaS) and platform as a service (PaaS) offerings have changed dramatically in the intervening years. It was time to reevaluate our decisions. We recently announced that we have selected Google Cloud Platform (GCP) as our cloud provider and are incredibly excited about this decision. This marks a shift for Etsy from infrastructure self-reliance to a best-in-class service provider. This shift allows us to spend less time maintaining our own infrastructure and more time on strategic features and services that make the Etsy marketplace great.

Although we use the term ‘vendor’ when referring to a cloud provider, we viewed this as much more than a simple vendor selection process. We are entering into a partnership and a long-term relationship. The provider that we have chosen is going to be a major part of our successful initial migration, as well as a partner in the long-term scalability and availability of our site and services. This was not a decision that we wanted to enter into without careful consideration and deliberate analysis. This article will walk you through the process by which we vetted and ultimately selected a partner. We are not going to cover why we chose to migrate to a cloud hosting provider nor are we going to cover the business goals that we have established to measure the success of this project.

From One, Many

While the migration to a cloud hosting provider can be thought of as a single project, it really is one very large project made up of many smaller projects. In order to properly evaluate each cloud provider accurately, we needed to identify all of the sub-projects, determine the specific requirements of each sub-project, and then use these requirements to evaluate the various providers. Also, to scope the entire project, we needed to determine the sequence, effort, dependencies, priority, and timing of each sub-project.

We started by identifying eight major projects, including the production render path for the site, the site’s search services, the production support systems such as logging, and the Tier 1 business systems like Jira. We then divided these projects further into their component projects—MySQL and Memcached as part of the production render path, for example. By the end of this exercise, we had identified over 30 sub-projects. To determine the requirements for each of these sub-projects, we needed to gather expertise from around the organization. No one person or project team could accurately, or in a timely enough manner, gather all of these requirements. For example, we not only needed to know the latency tolerance of our MySQL databases but also our data warehousing requirement for an API to create and delete data. To help gather all of these requirements, we used a RACI model to identify subproject ownership.

RACI

The RACI model is used to identify the responsible, accountable, consulted, and informed people for each sub-project. We used the following definitions:

Each Responsible person owned the gathering of requirements and the mapping of dependencies for their sub-project. The accountable person ensured the responsible person had the time, resources, and information they needed to complete the project and ultimately signed off that it was done.

Architectural Review

Etsy has long used an architectural review process, whereby any significant change in our environment, whether a technology, architecture, or design, undergoes a peer review. As these require a significant contribution of time from senior engineers, the preparation for these is not taken lightly. Experts across the organization collaborated to solicit diverse viewpoints and frequently produced 30+ page documents for architectural review.

We determined that properly evaluating various cloud providers required an understanding of the desired end state of various parts of our system. For example, our current provisioning is done in our colocation centers using a custom toolset as automation to build bare-metal servers and virtual machines (VMs). We also use Chef roles and recipes applied on top of provisioned bare-metal servers and VMs. We identified a few key goals for choosing a tool for infrastructure creation in the cloud including: greater flexibility, accountability, security, and centralized access control. Our provisioning team evaluated several tools against these goals, discussed them, and then proposed new workflows in an architecture review. The team concluded by proposing we use Terraform in combination with Packer to build the base OS images.

Ultimately, we held 25 architectural reviews for major components of our system and environments. We also held eight additional workshops for certain components we felt required more in-depth review. In particular, we reviewed the backend systems involved in generating pages of etsy.com (a.k.a. the production render path) with a greater focus on latency constraints and failure modes. These architectural reviews and workshops resulted in a set of requirements that we could use to evaluate the different cloud providers.

How it Fits Together

Once we had a firm-ish set of requirements for the major components of the system, we began outlining the order of migration. In order to do this we needed to determine how the components were all interrelated. This required us to graph dependencies, which involved teams of engineers huddled around whiteboards discussing how systems and subsystems interact.

The dependency graphing exercises helped us identify and document all of the supporting parts of each major component, such as the scheduling and monitoring tools, caching pools, and streaming services. These sessions ultimately resulted in high-level estimates of project effort and timing, mapped in Gantt-style project plans.

Experimentation

Earlier in the year, we ran some of our Hadoop jobs on one of the cloud providers’ services, which gave us a very good understanding of the effort required to migrate and the challenges that we would face in doing so at scale. For this initial experiment, however, we did not use GCP, so we didn’t have the same level of understanding of the cloud provider we ultimately chose.

We therefore undertook an experiment to enable batch jobs to run on GCP utilizing Dataproc and Dataflow. We learned a number of lessons from this exercise including that some of the GCP services were still in alpha release and not suitable for the workloads and SLAs that we required. This was the first of many similar decisions we’ll need to make: use a cloud service or build our own tool. In this case, we opted to implement Airflow on GCP VMs. To help us make these decisions we evaluated the priority of various criteria, such as vendor support of the service, vendor independence, and impact of this decision on other teams.

There is no right answer to these questions. We believe that these questions and criteria need to be identified for each team to consider them and make the best decision possible. We are also not averse to revisiting these decisions in the future when we have more information or alpha and beta projects roll out into general availability (GA).

Meetings

Over the course of five months, we met with the Google team multiple times. Each of these meetings had clearly defined goals, ranging from short general introductions of team members to full day deep dives on various topics such as using container services in the cloud. In addition to providing key information, these meetings also reinforced a shared engineering culture between Etsy and Google.

We also met with reference customers to discuss what they learned migrating to the cloud and what to look out for on our journey. The time spent with these companies was unbelievably worthwhile and demonstrated the amazing open culture that is pervasive among technology companies in NY.

We also met with key stakeholders within Etsy to keep them informed of our progress and to invite their input on key directional decisions such as how to balance the mitigation of financial risk with the time-discounted value of cost commitments. Doing so provided decision makers with a shared familiarity and comfort with the process and eliminated the possibility of surprise at the final decision point.

The Decision

By this point we had thousands of data points from stakeholders, vendors, and engineering teams. We leveraged a tool called a decision matrix that is used to evaluate multiple-criteria decision problems. This tool helped organize and prioritize this data into something that could be used to impartially evaluate each vendor’s offering and proposal. Our decision matrix contained over 200 factors prioritized by 1,400 weights and evaluated more than 400 scores.

This process began with identifying the overarching functional requirements. We identified relationship, cost, ease of use, value-added services, security, locations, and connectivity as the seven top-level functional requirement areas. We then listed every one of the 200+ customer requirements (customers referring to engineering teams and stakeholders) and weighted them by how well each supported the overall functional requirements. We used a 0, 1, 3, 9 scale to indicate the level of support. For example, the customer requirement of “autoscaling support” was weighted as a 9 for cost (as it would help reduce cost by dynamically scaling our compute clusters up and down), a 9 for ease of use (as it would keep us from having to manually spin up and down VMs), a 3 for value-added services (as it is an additional service offered beyond just basic compute and storage but it isn’t terribly sophisticated), and a 0 for supporting other functional requirements. Multiple engineering teams performed an in-depth evaluation and weighting of these factors. Clear priorities began to emerge as a result of the nonlinear weighting scale, which forces overly conservative scorers to make tough decisions on what really matters.  

We then used these weighted requirements to rank each vendor’s offering. We again used a 0, 1, 3, 9 scale for how well each cloud vendor met the requirement. Continuing with our “autoscaling support” example, we scored each cloud vendor a 9 for meeting this requirement completely in that all the vendors we evaluated provided support for autoscaling compute resources as native functionality. The total scores for each vendor reached over 50,000 points with GCP exceeding the others by more than 10%.
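To make the arithmetic concrete, here is a toy sketch of how a matrix like this gets tallied, assuming the per-area 0/1/3/9 ratings are summed into a single weight per requirement and each vendor’s total is the sum of weight times score. The requirement names, weights, and scores below are invented for illustration and are not Etsy’s actual values.

// Toy decision matrix: weight = summed 0/1/3/9 support ratings across functional areas,
// score = 0/1/3/9 rating of how well a vendor meets the requirement.
data class Requirement(val name: String, val weight: Int)

val requirements = listOf(
    Requirement("autoscaling support", 9 + 9 + 3),   // cost + ease of use + value-added services
    Requirement("private connectivity", 9 + 3)       // security + connectivity
)

val vendorScores = mapOf(
    "Vendor A" to listOf(9, 9),
    "Vendor B" to listOf(9, 3)
)

fun main() {
    for ((vendor, scores) in vendorScores) {
        val total = requirements.zip(scores).sumOf { (req, score) -> req.weight * score }
        println("$vendor: $total")
    }
}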

As is likely obvious from the context, it should be noted that Etsy’s decision matrix (filled with Etsy’s requirements, ranked with Etsy’s weights by Etsy’s engineers) is applicable only to Etsy. We are not endorsing this decision as right for you or your organization, but rather we are attempting to provide you with insight into how we approach decisions such as these that are strategic and important to us.

Just the beginning

Now, the fun and work begin. This process took us the better part of five months with a full-time technical project manager, dozens of engineers and engineering managers, as well as several lawyers, finance personnel, and sourcing experts working on this part-time or in some cases full-time. Needless to say, this was not an insignificant effort but really just the beginning of a multi-year project to migrate to GCP. We have an aggressive timeline to achieve our migration in about two years. We will do this while continuing to focus on innovative product features and minimizing risk during the transition period. We look forward to the opportunities that moving to GCP provides us and are particularly excited about this transformational change allowing us to focus more on core, strategic services for the Etsy marketplace by partnering with a best-in-class service provider.


VIPER on iOS at Etsy

Posted by on December 11, 2017 / 3 Comments

Background

Consistent design patterns help Etsy move fast and deliver better features to our sellers and buyers. Historically we built features in a variety of ways, but we have found great benefits from the consistency VIPER brings.

Inconsistent development practices can make jumping between projects and features fairly difficult. When starting a new project, developers would have to understand the way a feature was implemented with very little shared base knowledge.

Just before we began working on the new Shop Settings in the Sell on Etsy app, VIPER was gaining steam amongst the iOS community and given the complexity of Shop Settings we decided to give this new pattern a try.

While the learning curve was steep and the overhead was significant (more on that later), we appreciated the flexibility it had over other architectures, allowing it to be used more consistently throughout our codebase. We decided to apply the pattern elsewhere in our apps while investigating solutions to some of VIPER’s drawbacks.

What is VIPER?

VIPER is a Clean Architecture that aims to separate out the concerns of a complicated view controller into a set of objects with distinct responsibilities.

VIPER stands for View, Interactor, Presenter, Entity, and Router, each with its own responsibility:

View: What is displayed

Interactor: Handles the interactions for each use case.

Presenter: Sets up the logic for what is displayed in the view

Entity: Model objects that are consumed by the Interactor

Router: Handles the navigation/data passing from one screen to the next

Example Implementation

As a contrived example, let’s say we want a view that displays a list of people. Our VIPER implementation could be set up as follows:

View: A simple UITableView set up to display UITableViewCells

Interactor: The logic for fetching Person objects

Presenter: Takes a list of Person objects and returns a list of only the names for consumption by the view

Entity: Person objects

Router: Presents a detail screen about a given Person

The interaction would look something like this:

  1. The main view controller would tell the Interactor to fetch the Person entities
  2. The Interactor would fetch the Person entities and tell the Presenter to present these entities
  3. The Presenter would consume these entities and generate a list of strings
  4. The Presenter would then tell the UITableView to reload with the given set of strings

Additionally, if the user were to tap on a given Person object, the Interactor would call the Router to present the detail screen.

Overhead & the issues with VIPER

VIPER is not without issues. As we began to develop new features we ran into a few stumbling blocks.

The boilerplate for setting up the interactions between all the classes can be quite tedious:


// The Router needs a reference to the interactor for 
// any possible delegation that needs to be set up.
// A good example would be after a save on an edit 
// screen the interactor should be told to reload the data.
router.interactor = interactor

// This is needed for the presentation logic
router.controller = navigationController

// If the view has any UI interaction the presenter 
// should be responsible for passing this to the interactor
presenter.interactor = interactor

// Some actions that the Interactor may do may require 
// presentation/navigation logic
interactor.router = router

// The interactor will need to update the presenter when 
// a UI update is needed
interactor.presenter = presenter

As a result of all the classes involved and the interactions between them, developing a mental model of how VIPER works is difficult. Also, the numerous references required between classes can easily lead to retain cycles if weak references are not used in the appropriate places.

VIPERBuilder and beyond

In an attempt to reduce the amount of boilerplate and lower the barrier for using VIPER we developed VIPERBuilder.

VIPERBuilder consists of a simple set of base classes for the Interactor, Presenter and Router (IPR) as well as a Builder class.

By creating an instance of the VIPERBuilder object with your IPR subclasses specified via Swift generics (as needed) the necessary connections are made to allow for consistent VIPER implementation.

Example within a UIViewController:


lazy var viperBuilder: VIPERBuilder<NewInteractor, NewPresenter, NewRouter> = {
    return VIPERBuilder(controller: self)
}()

The VIPERBuilder classes intended to be subclassed are written in Objective C to allow subclassing in both Swift & Objective C. There is also an alternative builder object (VIPERBuilderObjC) that uses Objective C generics.

Since the boilerplate is fairly limited the only thing left to do is write your functionality and view code. The part of VIPERBuilder that we chose not to architect around is what code should actually go in the IPR subclasses. Instead we broke down our ideal use-cases for IPR subclasses into the Readme.

Additionally, we have an example project in the repo to give a better idea of how the separation of concerns work with VIPERBuilder.

The iOS team at Etsy has built many features with VIPER & VIPERBuilder over the past year: shop stats, multi-shop checkout, search and the new seller dashboard just to name a few. Following these guidelines in our codebase has lessened the impact of context switching between features implemented in VIPER. We are super excited to see how the community uses VIPERBuilder and how it changes over time.
