Twitter's search engineers have rewritten the site's search engine to serve its ever-growing traffic, improve end-user latency and availability of the service, and enable rapid development of new search features.
According to the Twitter Engineering blog, the change reduced search latencies roughly threefold and cut CPU load on Twitter's front-end servers in half.
The speed boost comes from two changes. First, the back end moved from MySQL to a real-time version of Lucene. Second, last week Twitter launched a replacement for its Ruby-on-Rails front end: a Java server it calls Blender. Twitter says the new front end will also let it iterate rapidly on search features in the coming months.
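Part of Blender's latency win comes from issuing backend queries in parallel and merging the results, rather than calling services one after another. The sketch below illustrates that fan-out pattern in plain Java; the service names and return values are illustrative stand-ins, not Twitter's actual APIs.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch of Blender-style fan-out: the front end queries
// several backends concurrently and blends the results, so total latency
// is roughly the slowest single call rather than the sum of all calls.
public class FanOutSketch {
    // Stand-ins for real backend services (names are illustrative only).
    static String queryTweetIndex(String q) { return "tweets:" + q; }
    static String queryUserIndex(String q)  { return "users:" + q; }

    static List<String> blend(String q, ExecutorService pool) {
        CompletableFuture<String> tweetHits =
            CompletableFuture.supplyAsync(() -> queryTweetIndex(q), pool);
        CompletableFuture<String> userHits =
            CompletableFuture.supplyAsync(() -> queryUserIndex(q), pool);
        // join() waits for both futures; the two lookups run in parallel.
        return List.of(tweetHits.join(), userHits.join());
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        System.out.println(blend("tsunami", pool));
        pool.shutdown();
    }
}
```

A sequential Rails-style controller would pay for each backend call in turn; the parallel version pays only for the slowest one, which is one way a front-end rewrite alone can cut tail latency.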
The upside of the new architecture for Twitter is fewer servers and lower costs. For end users, Twitter's search engine, which serves over one billion queries per day, should feel much faster, with the dreaded fail whale making fewer appearances.
Twitter gives an example of the performance gains. The week before it deployed Blender, the #tsunami in Japan contributed to a significant increase in query load and a related spike in search latencies. Following Blender's launch, Twitter's 95th-percentile latencies dropped 3x, from 800 ms to 250 ms, and CPU load on its front-end servers was cut in half.
Twitter also says the new system has the capacity to serve 10x the number of requests per machine, meaning it can support the same traffic with fewer servers, reducing its front-end service costs by an order of magnitude.