Experimenting with HHVM at Etsy

Posted by Dan Miller on April 6, 2015

In 2014 Etsy’s infrastructure group took on a big challenge: scale Etsy’s API traffic capacity 20X. We launched many efforts simultaneously to meet the challenge, including a migration to HHVM after it showed a promising increase in throughput. Getting our code to run on HHVM was relatively easy, but we encountered many surprises as we gained confidence in the new architecture.

What is HHVM?

Etsy Engineering loves performance, so when Facebook announced the availability of the HipHop Virtual Machine for PHP, its reported leap in performance over current PHP implementations got us really excited.

HipHop Virtual Machine (HHVM) is an open-source virtual machine designed for executing programs written in PHP. HHVM uses a just-in-time (JIT) compilation approach to achieve superior performance while maintaining the development flexibility that PHP provides.

This post focuses on why we became interested in HHVM, how we gained confidence in it as a platform, the problems we encountered and the additional tools that HHVM provides. For more details on HHVM, including information on the JIT compiler, watch Sara Golemon and Paul Tarjan’s presentation from OSCON 2014.

Why HHVM?

In 2014 engineers at Etsy noticed two major problems with how we were building mobile products. First, we found ourselves rewriting logic originally designed for a web context so that it could run in an API context. This led to feature drift between the mobile and web platforms as the amount of shared code decreased.

The second problem was how tempting it became for engineers to build lots of general API endpoints that could be called from many different mobile views. If you use too many of these endpoints to generate a single view on mobile you end up degrading that view’s performance. Ilya Grigorik’s “Breaking the 1000ms Time to Glass Mobile Barrier” presentation explains the pitfalls of this approach for mobile devices. To improve performance on mobile, we decided to create API endpoints that were custom to their view. Making one large API request is much more efficient than making many smaller requests. This efficiency cost us some reusability, though. Endpoints designed for Android listing views may not have all the data needed for a new design in iOS. The two platforms necessitate different designs in order to create a product that feels native to the platform. We needed to reconcile performance and reusability.

To do this, we developed “bespoke endpoints”. Bespoke endpoints aggregate smaller, reusable, cacheable REST endpoints. One request from the client triggers many requests on the server side for the reusable components. Each bespoke endpoint is specific to a view.

Consider this example listing view. The client makes a single request to a bespoke endpoint. That bespoke endpoint then makes many requests on behalf of the client. It aggregates the smaller REST endpoints and returns all of the data in one response to the client.

Bespoke Endpoint

Bespoke endpoints don’t just fetch data on behalf of the client, they can also do it concurrently. In the example above, the bespoke endpoint for the web view of a listing will fetch the listing, its overview, and the related listings simultaneously. It can do this thanks to curl_multi. Matt Graham’s talk “Concurrent PHP in the Etsy API” from phpDay 2014 goes into more detail on how we use curl_multi. In a future post we’ll share more details about bespoke endpoints and how they’ve changed both our native app and web development.
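As a rough sketch of that fan-out pattern (this is not our production code; the function, keys and URLs below are made up for illustration), a curl_multi aggregation looks something like this:

```php
<?php
// Fetch several component endpoints concurrently with curl_multi.
// Hypothetical illustration; names and URLs are made up.
function fetchConcurrently(array $urls): array {
    $multi = curl_multi_init();
    $handles = [];
    foreach ($urls as $key => $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($multi, $ch);
        $handles[$key] = $ch;
    }

    // Drive all transfers to completion; they proceed in parallel.
    do {
        $status = curl_multi_exec($multi, $running);
        if ($running) {
            curl_multi_select($multi);
        }
    } while ($running && $status === CURLM_OK);

    $responses = [];
    foreach ($handles as $key => $ch) {
        $responses[$key] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($multi, $ch);
        curl_close($ch);
    }
    curl_multi_close($multi);
    return $responses;
}

// A bespoke listing endpoint might aggregate its components like:
// $parts = fetchConcurrently([
//     'listing'  => 'https://api.example.com/listings/1',
//     'overview' => 'https://api.example.com/listings/1/overview',
//     'related'  => 'https://api.example.com/listings/1/related',
// ]);
```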

This method of building views became popular internally. Unfortunately, it also came with some drawbacks.

API traffic growth compared to Web traffic growth

Now that web pages had the potential to hit dozens of API endpoints, traffic on our API cluster grew more quickly than we anticipated. But that wasn’t the only problem.

Bootstrap Time Visualized

This graph represents all the concurrent requests that take place when loading the Etsy homepage. Between the red bars is work that is duplicated across all of the fanned out requests. This duplicate work is necessary because of the shared-nothing process architecture of PHP. For every request, we need to build the world: fetch the signed-in user, their settings, sanitize globals and so on. Although much of this duplicated work is carried out in parallel, the fan-out model still causes unnecessary work for our API cluster. But it does improve the observed response time for the user.

After considering many potential solutions to this problem, we concluded that trying to share state between processes in a shared-nothing architecture would inevitably end in tears. Instead, we decided to try speeding up all of our requests significantly, including the duplicated bootstrap work. HHVM seemed well-suited to the task. If this worked, we’d increase throughput on our API cluster and be able to scale much more efficiently.

Following months of iterations, improvements and bug fixes, HHVM now serves all of the fan-out requests for our bespoke endpoints. We used a variety of experiments to gain confidence in HHVM and to discover any bugs prior to deploying it in production.

The Experiments

Minimum Viable Product

The first experiment was simple: how many lines of PHP code do we have to comment out before HHVM will execute an Etsy API endpoint? The results surprised us. We only encountered one language incompatibility. All of the other problems we ran into were with HHVM extensions. There were several incompatibilities with the HHVM memcached extension, all of which we have since submitted pull requests for.

Does it solve our problem?

We then installed both PHP 5.4 and HHVM on a physical server and ran a synthetic benchmark. This benchmark randomly splayed requests across three API endpoints that were verified to work in HHVM, beginning at a rate of 10 requests per second and ramping up to 280 requests per second. The throughput results were promising.

The little green line at the bottom is HHVM response time

Our PHP 5.4 configuration began to experience degraded performance at about 190 requests per second, while HHVM didn’t degrade until about 270 requests per second. This validated our assumption that HHVM could deliver higher throughput, which would go a long way towards alleviating the load we had placed on our API cluster.

Gaining Confidence

So far we had validated that HHVM could run the Etsy API (at least with a certain amount of work) and that doing so would likely lead to an increase in throughput. Now we had to become confident that HHVM could run etsy.com correctly. We wanted to verify that responses returned from HHVM were identical to those returned by PHP. In addition to our API’s full automated test suite and good old-fashioned manual testing, we also turned to another technique: teeing traffic.

You can think of “tee” in this sense like tee on the command line. We wrote an iRule on our F5 load balancer to clone HTTP traffic destined for one pool and send it to another. This allowed us to take production traffic bound for our API cluster and also send it on to our experimental HHVM cluster, as well as to an isolated PHP cluster for comparison.

This proved to be a powerful tool. It allowed us to compare performance between two different configurations on the exact same traffic profile.

140 rps peak. Note that this is on powerful hardware.

On the same traffic profile HHVM required about half as much CPU as PHP did. While this wasn’t the reduction seen by the HHVM team, who claimed a third as much CPU should be expected, we were happy with it. Different applications will perform differently on HHVM. We suspect the reason we didn’t see a bigger win is that our internal API was designed to be as lightweight as possible. Internal API endpoints are primarily responsible for fetching data, and as a result tend to be more IO bound than others. HHVM optimizes CPU time, not IO time.

While teeing boosted our confidence in HHVM, there were a couple of hacks we had to put in place to get it to work. We didn’t want teed HTTP requests generating writes in our backend services, so we wrote read-only MySQL, Memcached and Redis interfaces to prevent writes. As a result, however, we weren’t yet confident that HHVM would write data correctly, or write the correct data.
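The shape of those read-only interfaces can be sketched like this (the class names are illustrative, not our actual code, and a toy in-memory client stands in for the real Memcached client):

```php
<?php
// Toy in-memory client standing in for a real Memcached client.
class ArrayCache {
    private $data;
    public function __construct(array $data = []) { $this->data = $data; }
    public function get($key) {
        return array_key_exists($key, $this->data) ? $this->data[$key] : false;
    }
    public function set($key, $value) { $this->data[$key] = $value; return true; }
    public function decrement($key) {
        return isset($this->data[$key]) ? --$this->data[$key] : false;
    }
}

// Read-only wrapper: reads pass through, writes report success but do
// nothing, so teed requests behave normally without mutating state.
class ReadOnlyCache {
    private $inner;
    public function __construct($inner) { $this->inner = $inner; }
    public function get($key) { return $this->inner->get($key); }
    public function set($key, $value) { return true; }  // write dropped
    public function decrement($key) { return true; }    // write dropped
}
```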

Employee Only Traffic

In order to gain confidence in that area we configured our bespoke endpoints to send all requests to the HHVM cluster if the user requesting the page was an employee. This put almost no load on the cluster, but allowed us to ensure that HHVM could communicate with backend services correctly. 

At this point we encountered some more incompatibilities with the memcached extension. We noticed that our API rate limiter was never able to find keys to decrement. This was caused by the decrement function being implemented incorrectly in the HHVM extension. In the process of debugging this we noticed that memcached was always returning false for every request HHVM made to it. This turned out to be a bug in the client-side hashing function present in HHVM. What we learned from this is that while the HHVM runtime is rock-solid, a lot of the included extensions aren’t. Facebook thoughtfully wrote a lot of the extensions specifically for the open source release of HHVM. However, many of them are not used internally, because Facebook has its own clients for memcached and MySQL, and as a result they have not seen nearly as much production traffic as the rest of the runtime. This is important to keep in mind when working with HHVM. We expect this situation will improve as more and more teams test it out and contribute patches back to the project, as we at Etsy will continue to do.

After resolving these issues it came time to slowly move production traffic from the PHP API cluster to the HHVM API cluster.

Slow Ramp Up

As we began the slow ramp in production we noticed some strange timestamps in the logs:

[23/janv./2015:22:40:32 +0000]

We even saw timestamps that looked like this:

[23/ 1月/2015:23:37:56]

At first we thought we had encountered a bug with HHVM’s logging system. As we investigated we realized the problem was more fundamental than that.

At Etsy we use the PHP function setlocale() to assist in localization. During a request, after we load a user, we call setlocale() to set their locale preferences accordingly. The PHP function setlocale() is implemented using the C library call setlocale(3), which is process-wide: it affects all the threads in a process. Most PHP SAPIs are implemented such that each request is handled by exactly one process, with many processes simultaneously handling many requests.

HHVM is a threaded SAPI: it runs as a single process with multiple threads, where each thread handles exactly one request at a time. When you call setlocale(3) in this context it affects the locale for all threads in that process. As a result, requests can come in and trample the locales set by other requests, as illustrated in this animation.

locale overwriting
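To make the hazard concrete, here is a sketch of the per-request pattern (the specific date formatting is just for illustration):

```php
<?php
// Per-request bootstrap as it looks under a process-per-request SAPI.
// Under HHVM's threaded SAPI, the setlocale(3) call below changes the
// locale for EVERY thread in the process, not just this request's.
function handleRequest($userLocale) {
    setlocale(LC_TIME, $userLocale);
    // ...request body runs; a concurrent request on another thread
    // calling setlocale() here silently changes this request's output.
    return strftime('%d/%b/%Y', mktime(12, 0, 0, 1, 15, 2020));
}

echo handleRequest('C'), "\n"; // prints 15/Jan/2020 in the C locale
```

Had another “request” switched the process to fr_FR between the setlocale() call and the strftime() call, the same code would print 15/janv./2020, which is exactly the log corruption we saw.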

We have submitted a pull request re-implementing the PHP setlocale() function using thread-local equivalents. When migrating to HHVM it’s important to remember that HHVM is threaded, and different from most other SAPIs in common use. Do an audit of extensions you’re including and ensure that none of them cause side effects that could affect the state of other threads.

Release!

After rolling HHVM out to just the internal API cluster we saw a noticeable improvement in performance across several endpoints.

public_search_listings

HHVM vs PHP on Etsy Internal API

It’s Not Just Speed

In the process of experimenting with HHVM we discovered a few under-documented features that are useful when running large PHP deployments.

Warming up HHVM

The HHVM team recommends that you warm up your HHVM process before having it serve production traffic:

“The cache locality of the JITted code is very important, and you want all your important endpoints code to be located close to each other in memory. The best way to accomplish this is to pick your most important requests (say 5) and cycle through them all serially until you’ve done them all 12 times. “ 

They show this being accomplished with a simple bash script paired with curl. There is a more robust method in the form of “warmup documents”. 
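A minimal version of that bash-and-curl warmup might look like this (the port and paths are illustrative, not Etsy’s actual configuration):

```shell
#!/bin/bash
# Cycle the most important endpoints serially, 12 times each, so the
# JIT lays their code out close together in memory.
warmup() {
  for i in $(seq 1 12); do
    for path in /listing /search /homepage; do
      curl -s -o /dev/null "http://localhost:8080${path}" || true
    done
  done
}

# Run before the server is put into rotation:
warmup
```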

You specify a warmup document in an HDF file like this:

cmd = 1
url = /var/etsy/current/bin/hhvm/warmup.php  // script to execute
remote_host = 127.0.0.1
remote_port = 35100
headers {  // headers to pass into HHVM
  0 {
    name = Accept
    value = */*
  }
  1 {
    name = Host
    value = www.etsy.com
  }
  2 {
    name = User-Agent
    value = Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_2) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.46 Safari/535.11
  }
}

To tell HHVM to execute that warmup document on startup, simply reference it like so:

Server {
  WarmupRequests {
    * = /var/etsy/current/bin/hhvm/warmup.hdf
  }
}

This will execute /var/etsy/current/bin/hhvm/warmup.php between when the HHVM binary is executed and when the process accepts connections. It will only execute the document once, however, and HHVM will not JIT any code until after the twelfth request. To execute a warmup document 12 times, simply reference it 12 times from the config file, like so:

Server {
  WarmupRequests {
    * = /var/etsy/current/bin/hhvm/warmup.hdf
    * = /var/etsy/current/bin/hhvm/warmup.hdf
    * = /var/etsy/current/bin/hhvm/warmup.hdf
    * = /var/etsy/current/bin/hhvm/warmup.hdf
    * = /var/etsy/current/bin/hhvm/warmup.hdf
    * = /var/etsy/current/bin/hhvm/warmup.hdf
    * = /var/etsy/current/bin/hhvm/warmup.hdf
    * = /var/etsy/current/bin/hhvm/warmup.hdf
    * = /var/etsy/current/bin/hhvm/warmup.hdf
    * = /var/etsy/current/bin/hhvm/warmup.hdf
    * = /var/etsy/current/bin/hhvm/warmup.hdf
    * = /var/etsy/current/bin/hhvm/warmup.hdf
  }
}

Profiling HHVM with perf(1)

HHVM makes it really easy to profile PHP code. One of the most interesting ways is with Linux’s perf tool.

HHVM is a JIT that converts PHP code into machine instructions. Because these instructions, or symbols, are not in the HHVM binary itself, perf cannot automatically translate them into function names. HHVM provides an interface to aid in this translation: a file in /tmp/ named according to this template:

/tmp/perf-<pid of process>.map 

The first column in the file is the address of the start of that function in memory. The second column is the length of the function in memory. And the third column is the function to print in perf.
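For example (the addresses and sizes here are made up; the function names echo the ones discussed below), a few lines of a map file might look like:

```
7f6a4c83d000 2a0 PHP::Api_Handler::respond
7f6a4c83d2a0 180 HPHP::f_sort
```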

Perf looks up processes it has recorded by their pid in /tmp to find and load these files. (The pid map file needs to be owned by the user running perf report, regardless of the permissions set on the file.) 

If you run

sudo perf record -p <pid> -ag -e instructions -o /tmp/perf.data -- sleep 20

perf will record all of the symbols executing for the given pid, and the number of instruction events each symbol accounted for, over a period of 20 seconds. It stores that data in /tmp/perf.data.

Once you have gathered data from perf with a command such as the above, you can display that data interactively in the terminal using `perf report`.

perf report


This shows us a list of the most expensive functions (in terms of instructions executed on the CPU). Functions prefixed with HPHP:: are built into the language runtime; for example HPHP::f_sort accounts for all calls the PHP code makes to sort(). Functions prefixed with PHP:: are programmer-defined PHP functions. Here we can see that 36% of all CPU time occurred in Api_Handler::respond(), for example. Using perf to profile PHP code is powerful on its own, but the ability to jump from a PHP function into an HPHP function lets you see which parts of your codebase HHVM doesn’t handle efficiently. Using this process we were able to determine that sort() calls were slow when enable_zend_sorting was enabled. After patching it to be more efficient, we realized a significant CPU and performance win:

CPU drop

Median perf drop

This change resulted in an additional increase in throughput across our API cluster as well as improved response times.

HHVM Interactive Debugger

HHVM provides an interactive debugger called “hphpd”. hphpd works similarly to gdb: it is a command line based interactive debugger. 

$ hhvm -c /etc/php.d/etsy.ini -m debug bin/test.php
Welcome to HipHop Debugger!
Type "help" or "?" for a complete list of commands.
 
Program bin/test.php loaded. Type '[r]un' or '[c]ontinue' to go.

Here we set a breakpoint on a function:

hphpd> b Shutdown::registerApiFunctions()
Breakpoint 1 set upon entering Shutdown::registerApiFunctions()
But wont break until class Shutdown has been loaded.

Commence execution until we encounter a breakpoint:

hphpd> continue
Breakpoint 1 reached at Shutdown::registerApiFunctions() on line 101 of /home/dmiller/development/Etsyweb/phplib/Shutdown.php
100     public static function registerApiFunctions() {
101*        self::registerFunction(['Shutdown', 'apiShutdown']);
102     }

Step into that function:

hphpd> step
Break at Shutdown::registerFunction() on line 74 of /home/dmiller/development/Etsyweb/phplib/Shutdown.php
73     public static function registerFunction() {
74*        $callback = func_get_args();
75

Step over that function:

hphpd> next
Break at Shutdown::registerFunction() on line 76 of /home/dmiller/development/Etsyweb/phplib/Shutdown.php
75
76*        if (empty($callback)) {
77             $bt = new Dev_Backtrace();
 
hphpd> next
Break at Shutdown::registerFunction() on line 82 of /home/dmiller/development/Etsyweb/phplib/Shutdown.php
81         }
82*        if (!is_callable($callback[0])) {
83             $bt = new Dev_Backtrace();

After adding a few lines to your configuration file you can use this debugger on any code that executes in HHVM.

Lessons Learned from the Experiment

The process of migrating our API cluster to HHVM taught us a lot about HHVM as well as how to better perform such migrations in the future. The ability to clone HTTP traffic and tee it to a read-only test cluster allowed us to gain confidence in HHVM much more quickly than we could have otherwise. While HHVM proved to be rock-solid as a language runtime, extensions proved to be less battle-tested. We frequently encountered bugs and missing features in the MySQL, Memcached and OAuth extensions, among others. Finally, it’s important to remember that HHVM is threaded, which can result in a weird interplay between the runtime and system calls. The resulting behavior can be very surprising.

HHVM met our expectations. We were able to realize greater throughput on our API cluster, as well as improved response times. Buying fewer servers also means less waste and less power consumption in our data centers, which is important to Etsy as a Benefit Corporation.

 

You can follow Dan on Twitter @jazzdan.

Special thanks to Sara Golemon, Paul Tarjan and Josh Watzman at Facebook. Extra special thanks to Keyur Govande and Adam Saponara at Etsy.


Comments


We moved from mod_php to the JVM a few years ago, and as much as JIT eventually makes for a big performance improvement it also comes with a bunch of operational gotchas. When HHVM started getting steam I really wanted to see someone address them (warmup) so that I didn’t have to learn it all the hard way a second time. I had tried poking around a few times but hadn’t found anything that addresses warmup concerns directly like this.

It’s probably early, but when you get to the eventual memory ‘leak’, allocation rate, garbage collection and thread contention hurdles please share.

    Hey Jim,

    HHVM does have a slight memory leak problem. The HHVM translation cache stores any code that is JIT’d in memory. If a given function is changed due to a deploy and re-JIT’d, HHVM will not evict the old version from memory. This means that HHVM’s memory usage will slowly grow over time. This is not a problem if you run HHVM in RepoAuthoritative mode like Facebook does.

    To get around this we simply do a rolling restart across our HHVM cluster every night. We need to do this frequently because we deploy up to 60 times every day.

    We track allocation rate by graphing the jemalloc stats reported by the HHVM admin server. So far that hasn’t been a problem. As we’re not using any of Hack’s async functionality yet, we haven’t had any issues with thread contention either.

    I’ll be sure to write more if we encounter other interesting problems!

Great article. Thanks for sharing the details on the problems you guys ran into. Can you please share specifics on the timeline and the resources it took from the inception to getting HHVM to production. And also the total number of bugs and the pull requests you guys had to submit.

    Hi Prakash,

    We started experimenting with HHVM in September of 2014 and we rolled out to 100% of our internal API in February of 2015.

    We’ve submitted 10 pull requests to HHVM and counting!


Thanks for writing this up, Dan!

So looking at the graphs then moving to HHVM resulted in roughly doubling the capacity of the Etsy servers (because RPS roughly halved?)?

How many classes / lines of code does Etsy have? And how long did it take in man months to port them and get everything working on HHVM (i.e. the “months of iterations”)?

Did you consider using technology like [1], [2], or [3] instead of HHVM? And if so, why did you rule it out?

[1] https://github.com/Automattic/prefork
[2] http://reactphp.org/
[3] http://www.php-cpp.com/

    Hey Simon,

    Just a bit over a 2x increase in throughput, that’s right.

    Etsy has about 2.5 million lines of PHP code across about 14,000 files. It took a team of three people about 5 months to complete this project. Most of that was not spent resolving incompatibilities, but gaining confidence in the system using the traffic tee and the other strategies discussed in the blog post.

    I can’t say we looked closely at prefork or reactphp at all. Taking hot code paths and porting them to a PHP extension is a common strategy. We didn’t want to take that approach because we felt that it would negatively impact the productivity of our developers who are used to writing in PHP. Suddenly having to switch from interpreted PHP to compiled C++ is a bit of a pain, but that cost is overshadowed by the deployment cost. Deploying a PHP extension 60 times[1] a day would require a lot of tooling that we haven’t built yet. HHVM allowed us some of the benefits of rewriting hot code paths in C++ extensions without complicating our deployment story.

    [1] http://www.infoq.com/news/2014/03/etsy-deploy-50-times-a-day

      Thanks for the info.

      I hear your concerns about the deployment cost of extensions in general. However, you may be interested to learn that PHP-CPP very recently introduced a dynamic extension loading feature [1]. This makes it easier to deploy extensions (PHP-CPP extensions, not PHP extensions) on-the-fly without even restarting Apache or updating the PHP config file(s). It allows you to tie a particular extension version to a particular PHP code version, thus being able to have many versions of the same extension, and being able to choose which to load at request-time.

      And regarding the overhead of compiling PHP-CPP: I’m imagining a solution where the compilation happens transparently on-the-fly… so there should be no change in either workflow or deployment as far as the developer is concerned… we’ll see if this pans out…

      Disclosure: I have not used PHP-CPP in production yet and am currently evaluating it.

      [1] http://www.php-cpp.com/documentation/dynamic-loading


Thanks for sharing the results. Just one question, you mention PHP 5.4 in your tests and in the graph above “It’s Not Just Speed” you’ve PHP 5.5 written. Which PHP version were you comparing against?
Have you tried with 5.5 / 5.6 and opcache enabled?

Thanks.

    Hi Guillermo,

    Between September, when we started this project, and February we upgraded from PHP 5.4 to PHP 5.5. That’s why there is a discrepancy between the earlier graphs and the later ones. All of our PHP benchmarks in this blog post were run with opcache enabled, but we have not run PHP 5.6 in production yet.

Thanks for the write-up! I love these kind of articles that give a bit of insight into how other people do their thing, especially when it’s one of the bigger players.

I’ve been hoping to get a hands-on with HHVM since I first heard of it. These kind of stories are encouraging. It sounds like you have a great dev team at Etsy. Thanks for you guys’ work. This is the beauty of open source right here.

How did you generate all those graphs?

Cheers



Very interesting, thanks for the post.

Have you done any analysis or research into maintaining a http connection pool on your “bespoke endpoints” and sending keep-alive headers from your APIs? I realise PHP doesn’t support this natively.

Thanks,
Si

    Hey Simon,

    Thanks! We don’t connection pool as we don’t want connections persisting between requests. Doing so just leaves us with more global state to maintain, which is something we try to avoid in PHP.

    We experimented with having the F5 load balancer maintain connections between the bespoke caller, the F5 and the HHVM API cluster but saw little performance difference.


Hi Dan, Thank you for the post, we are now thinking on migrating from HPHPc to HHVM and I find this information invaluable.

Just a quick question, could you share with us some information about the memory consumption? With HPHPc, the memory used is quite low and we are worried about that when migration to HHVM.

Thanks,
Daniel

    Hi Daniel,

    HHVM uses about 1/3 as much memory on our API cluster as PHP 5.5 did. It’s my understanding that the memory usage should be comparable to HPHPc.

Reading the explanation of the initial performance problem, it seems this is largely a consequence of using the REST endpoint as the unit of code decomposition. Essentially, you’ve rediscovered the age-old pitfalls of fine-grained RPC.

Glad to hear that HHVM helped with this, but it’d be interesting to hear why you rejected the alternative – using in-process libraries as the unit of decomposition, i.e. replacing RPC with LPC (local procedure calls). That code is sitting there anyway, behind the various endpoints. Why not just deploy it to the node where it’s needed?

    Hi Jaime,

    Excellent point! In the long term we want to remain with an RPC-like system so that the bespoke endpoint doing the aggregation and the reusable component endpoints can be separated geographically. In the short term we did end up implementing a local API client like you described that can be used to power views that do large amounts of fanned-out requests.

Hi Dan,

We are trying out the warmup docs approach and running into some problems. What is in your warmup doc? Is it curl calls to the API endpoints? We’ve tried that and kept getting connection refused, so we are assuming that HHVM is not listening to any ports during the warmup phase.

As far as we can see the warmup docs are a bit of a blank art.


Great to see that you are using HHVM. I was wondering if you are planning to use the language features of Hack or is it just for the performance improvements of HHVM?

    Hi Leon,

    We’re currently only using HHVM for the performance improvements it offers code written in PHP. We’re excited about the features that Hack offers, and we’re tracking it along with PHP 7 closely.


it is always nice to see what big companies are doing to push PHP to limits, have you guys looked into PHP 7 the throughput is better than HHVM in some cases. We are currently in process of migrating our api’s to PHP 7.

    Hi ally,

    We haven’t been able to run a side-by-side comparison of PHP 7 and HHVM. We’re still waiting for some extensions to be ported over.


I didn’t know Etsy is using HHVM. Smart move.
