Processing over 1 trillion events per day, Twitter is one of the largest Hadoop* users in the world—typical clusters contain over 100,000 HDDs, half a million compute threads, and an exabyte of physical storage.
But there was a scaling problem. The company's configuration was reaching an I/O performance limit that could not be solved by simply adding more and bigger HDDs, due to space and power limitations.
Join Milind Damle, Senior Director of Intel Big Data Technologies, to find out how Twitter got a new handle on this ocean of data, including how they:
- Reduced runtimes by up to 50% on existing hardware
- Removed a storage I/O bottleneck, which enabled them to increase processor utilization (see the sketch after this list)
- Achieved higher data center density by reducing the number of required HDDs
- Achieved a projected 30% savings in total cost of ownership (TCO)
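
The storage-bound signature behind the second bullet is straightforward to spot: disks sit near 100% busy while processors idle waiting for data. As a minimal illustration (not Twitter's or Intel's actual tooling), the sketch below samples the standard Linux /proc interfaces to compare disk busy time against CPU utilization on a node; the `DEVICE` name and `INTERVAL` are placeholder assumptions you would adjust for your hardware.

```python
#!/usr/bin/env python3
"""Minimal sketch: detect an I/O-bound node by comparing disk busy time
with CPU utilization over a sampling interval (Linux /proc interfaces)."""

import time

DEVICE = "sda"    # hypothetical device name; pick one of the node's data disks
INTERVAL = 5.0    # sampling window in seconds (assumed value)

def cpu_times():
    # First line of /proc/stat: aggregate jiffies per CPU state.
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]  # idle + iowait both count as "not busy"
    return sum(fields), idle

def disk_io_ms(device):
    # Column 13 of /proc/diskstats: total milliseconds spent doing I/O.
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if parts[2] == device:
                return int(parts[12])
    raise ValueError(f"device {device!r} not found in /proc/diskstats")

total0, idle0 = cpu_times()
io0 = disk_io_ms(DEVICE)
time.sleep(INTERVAL)
total1, idle1 = cpu_times()
io1 = disk_io_ms(DEVICE)

cpu_busy = 100.0 * (1 - (idle1 - idle0) / (total1 - total0))
disk_busy = 100.0 * (io1 - io0) / (INTERVAL * 1000)

print(f"CPU busy:     {cpu_busy:5.1f}%")
print(f"{DEVICE} busy: {disk_busy:5.1f}%")
# A disk pegged near 100% busy while the CPU sits low is the classic
# storage-bound pattern described above: processors starved for I/O.
```

Platform Profiler automates this kind of system-wide view, correlating storage, memory, and CPU utilization over long-running workloads rather than one disk at a time.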
Get the software
Intel® VTune™ Amplifier Platform Profiler: this feature is included, free, in the standalone Intel® VTune™ Amplifier tool.
More resources