On Prem - Acceleration 2.0

Game-changing Economics

Programmable acceleration hardware combines with caching technology to cut the total cost of ownership of scaling Spark clusters by 80% or more.


Spark scaling at 89% lower TCO

“Infinite horizontal scaling” has been the default expansion paradigm for Spark and Hadoop. But recent innovation has opened up far more efficient and cost-effective ways to gain performance.

The example below shows two cluster expansions that both result in a 25% performance gain.

  • Example 1 is the traditional path: adding 25% more servers, with the attendant upfront hardware and ongoing operating costs.
  • Example 2 is Bigstream Acceleration 2.0: minimal additional hardware and an 89% lower total cost of ownership (a worked cost sketch follows this list).
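
To make the comparison concrete, the sketch below works the arithmetic in Scala. Every figure (cluster size, server cost, yearly operating cost, card cost) is an illustrative placeholder, not Bigstream or vendor pricing; the resulting percentage is driven entirely by those inputs.

    object TcoSketch extends App {
      // Placeholder inputs -- illustrative only, not vendor pricing.
      val nodes          = 100      // servers in the existing cluster
      val serverCapex    = 10000.0  // assumed purchase cost per added server
      val serverOpexYear = 3000.0   // assumed power/cooling/admin per server per year
      val years          = 3        // planning horizon
      val cardCost       = 500.0    // assumed U.2 card plus one-time license, per server

      // Example 1: traditional scale-out -- 25% more servers
      val addedServers = (nodes * 0.25).toInt
      val scaleOutTco  = addedServers * (serverCapex + serverOpexYear * years)

      // Example 2: one accelerator card per existing server
      val accelTco = nodes * cardCost

      println(f"Scale-out TCO over $years years: $$$scaleOutTco%,.0f")
      println(f"Acceleration TCO:                $$$accelTco%,.0f")
      println(f"TCO reduction: ${(1 - accelTco / scaleOutTco) * 100}%.0f%%")
    }

Substitute your own hardware and operating costs to model your cluster; the savings you see will depend on those numbers, not on the placeholders above.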

How it Works

Acceleration 2.0 leverages acceleration, caching, and the synergies between them (a representative Spark job is sketched after the list below).

  • Acceleration – offloads computational operations to FPGAs for their superior performance
  • Caching – intelligently caches hot data on fast SSDs, removing I/O bottlenecks
  • Synergy – leverages the interplay between acceleration and caching to deliver superior performance at low cost
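
The sketch below shows the kind of unmodified Spark SQL job this targets: the Parquet scan, filter, and aggregation are the computational operations an FPGA can take over, and the repeatedly read input is the hot data an SSD cache keeps close. The dataset path, column names, and application name are illustrative assumptions.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object EventsByRegion {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("events-by-region")
          .getOrCreate()

        // Scan: repeated reads of this dataset are what a fast SSD cache keeps hot.
        val events = spark.read.parquet("hdfs:///data/events")

        events
          .filter(col("status") === "OK")          // filter
          .groupBy("region")                       // shuffle + aggregate
          .agg(count("*").as("events"),
               avg("latency_ms").as("avg_latency_ms"))
          .write.mode("overwrite")
          .parquet("hdfs:///reports/events_by_region")

        spark.stop()
      }
    }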

Easy POC and Deployment

POC

  • Benchmark performance with a straightforward methodology that can be completed in a single day; a minimal timing harness is sketched after this list.
  • Cloud testing is also available.
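
A minimal sketch of such a benchmark, assuming the same job is submitted once on the baseline cluster and once on accelerated nodes, with the wall-clock times compared. The dataset path and query are illustrative.

    import org.apache.spark.sql.SparkSession

    object PocBenchmark {
      // Wall-clock timer around an arbitrary block of work.
      def timed[A](label: String)(body: => A): A = {
        val start  = System.nanoTime()
        val result = body
        println(f"$label took ${(System.nanoTime() - start) / 1e9}%.1f s")
        result
      }

      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("poc-benchmark").getOrCreate()

        // Run a representative query and record its elapsed time; compare the
        // figure from the baseline run with the run on accelerated nodes.
        timed("representative aggregation") {
          spark.read.parquet("hdfs:///data/events")
            .groupBy("region")
            .count()
            .count()                 // action that forces full execution
        }

        spark.stop()
      }
    }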

Seamless deployment

  • A single U.2 card installed per server, with a one-time embedded software license
  • No changes to user code
  • No changes to HDFS
  • Works with your Spark version

Deliver Results

Fast Insights

Jobs run up to 10X faster without changing your Apache Spark code, increasing data science productivity and making it easier to meet tight SLAs.

Enriched Analytics

Incorporate fuller data sets and more data sources by eliminating the bottlenecks that force you to make compromises.

Lower TCO

Instead of scaling out or up, optimize your existing Spark environment with Bigstream, lowering hardware and operating expenses.