
Announcing Together Inference Engine 2.0 with new Turbo and Lite endpoints #886

Open ShellLM opened 4 weeks ago

ShellLM commented 4 weeks ago

Announcing Together Inference Engine 2.0 with new Turbo and Lite endpoints

JULY 18, 2024 • BY TOGETHER AI

Today we are announcing a new inference stack, which provides decoding throughput 4x faster than open-source vLLM, and outperforms commercial solutions including Amazon Bedrock, Azure AI, Fireworks, and Octo AI by 1.3x to 2.5x. The Together Inference Engine achieves over 400 tokens per second on Meta Llama 3 8B by building on the latest advances from Together AI, including FlashAttention-3, faster GEMM & MHA kernels, innovations in quality-preserving quantization, and speculative decoding.

We are also introducing new Together Turbo and Together Lite endpoints, starting today with Meta Llama 3 and expanding to other models soon. Together Turbo and Together Lite enable performance, quality, and price flexibility so enterprises do not have to compromise. They offer the most accurate quantization available on the market, with Together Turbo closely matching the quality of full-precision FP16 models. These advances make Together Inference the fastest engine on NVIDIA GPUs, and the most accurate and cost-efficient solution for building with Generative AI at production scale.

Today, over 100,000 developers and companies like Zomato, DuckDuckGo, and the Washington Post build and run their Generative AI applications on the Together Inference Engine.

"Our endeavour is to deliver exceptional customer experience at all times. Together AI has been our long standing partner and with Together Inference Engine 2.0 and Together Turbo models, we have been able to provide high quality, fast, and accurate support that our customers demand at tremendous scale." —Rinshul Chandra, COO, Food Delivery, Zomato

Today's release includes the new Together Turbo and Together Lite endpoints, described in detail below.

Together AI has been a leader in inference optimization research, and the new engine is built on our own innovations, including FlashAttention-3 kernels, custom-built speculators based on RedPajama, and the most accurate quantization techniques available on the market.

Together Turbo endpoints

Together Turbo, our new flagship endpoints, provide the best combination of performance, quality, and cost-efficiency. Together Turbo endpoints leverage the most accurate quantization techniques and proprietary innovations to provide leading performance without compromising quality.
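As a rough intuition for what a quality-preserving quantization round-trip looks like (a toy illustration only; Together's production scheme is proprietary and not shown here), the sketch below applies per-channel symmetric int8 quantization to a weight matrix and measures the reconstruction error:

```python
import numpy as np

# Toy per-channel symmetric int8 quantization, purely to illustrate how
# small a careful quantization round-trip error can be. NOT Together's
# actual (FP8-based) scheme.
def quantize_dequantize_int8(w: np.ndarray) -> np.ndarray:
    # One scale per output channel (row): map the largest |value|
    # in each row to 127, round to integers, then map back.
    scales = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scales = np.maximum(scales, 1e-12)            # guard against all-zero rows
    q = np.clip(np.round(w / scales), -127, 127)
    return q * scales

rng = np.random.default_rng(0)
w = rng.normal(size=(1024, 1024)).astype(np.float32)
w_hat = quantize_dequantize_int8(w)
rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"relative reconstruction error: {rel_err:.3%}")  # typically on the order of 1%
```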

Quality

The figures below provide a comprehensive quality analysis of the Together Turbo endpoints on three leading quality benchmarks: HELM Classic, WikiText/C4, and AlpacaEval 2.0. We compare Together Turbo models against our full-precision reference implementation, as well as two commercial model providers, Fireworks and Groq, that serve quantized models by default.

We ran the official HELM standard configuration which includes 1,000 instances with 3 trials for 17 tasks. We evaluated using the entire WikiText test set and the first one million tokens of the C4 validation set. Using AlpacaEval 2.0, we ran all 805 prompts 3 times and averaged the results. Across these different quality evaluation perspectives, Together Turbo results closely match the Llama-3-8B full precision reference implementation.
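For readers who want to run a similar perplexity-style check locally, here is a minimal sketch using Hugging Face transformers on the WikiText-2 test split (a smaller stand-in for the full WikiText/C4 setup above; the model ID and window size are illustrative choices, not Together's exact configuration):

```python
import math
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is used only so the sketch runs on a laptop; swap in the model
# you actually want to evaluate if you have the hardware and access.
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
input_ids = tokenizer(text, return_tensors="pt").input_ids

window, nlls, n_tokens = 512, [], 0
with torch.no_grad():
    for start in range(0, input_ids.size(1) - 1, window):
        chunk = input_ids[:, start : start + window + 1]
        # Next-token loss over the chunk; labels are shifted inside the model.
        loss = model(chunk, labels=chunk).loss
        nlls.append(loss * (chunk.size(1) - 1))
        n_tokens += chunk.size(1) - 1

print(f"perplexity: {math.exp(torch.stack(nlls).sum().item() / n_tokens):.2f}")
```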

Together Turbo also significantly outperforms other FP8 implementations, as shown in the following figures.

Performance

Together Turbo provides up to a 4.5x performance improvement over vLLM (version 0.5.1) on Llama-3-8B-Instruct and Llama-3-70B-Instruct. Across common inference regimes with different batch sizes and context lengths, Together Turbo consistently outperforms vLLM. For Llama-3-8B-Instruct served on a single H100 GPU and Llama-3-70B-Instruct served on 8xH100 GPUs, Together Turbo achieves 2.8x-4.5x and 2.6x-4.3x decoding speedups over vLLM, respectively. Several factors contribute to these gains, including the FlashAttention-3 kernels, faster GEMM and MHA kernels, quality-preserving quantization, and speculative decoding described above.

Cost efficiency

Together Turbo endpoints are available through the Together API as serverless endpoints at more than 10x lower cost than GPT-4o, and provide significant efficiency benefits for customers hosting their own dedicated endpoints on the Together Cloud.

Together Turbo achieves up to 7x the capacity of vLLM (version 0.5.1), translating to up to a 7x reduction in costs.

Further, for dedicated-endpoint customers switching from Together Reference to Together Turbo, Together Turbo on 4xH100 GPUs consistently achieves decoding throughput within 8% of Together Reference endpoints running on 8xH100 GPUs, roughly halving costs.
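As a back-of-envelope way to see how throughput and GPU count translate into serving cost (the hourly rates and throughputs below are placeholders, not Together's pricing), the arithmetic looks like this:

```python
# Hypothetical numbers purely to show the arithmetic: cost per million
# generated tokens = (GPUs * $/GPU-hour) / (tokens/sec * 3600) * 1e6.
def cost_per_million_tokens(num_gpus: int, gpu_hourly_usd: float, tokens_per_sec: float) -> float:
    hourly_cost = num_gpus * gpu_hourly_usd
    tokens_per_hour = tokens_per_sec * 3600
    return hourly_cost / tokens_per_hour * 1_000_000

# Example: serving nearly the same throughput on half the GPUs roughly
# halves the cost per token, which is the effect described above.
baseline = cost_per_million_tokens(num_gpus=8, gpu_hourly_usd=3.0, tokens_per_sec=5000)
turbo    = cost_per_million_tokens(num_gpus=4, gpu_hourly_usd=3.0, tokens_per_sec=5000 * 0.92)
print(f"baseline: ${baseline:.2f}/M tokens, turbo: ${turbo:.2f}/M tokens")
```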

Through innovative improvements throughout the stack, Together Turbo provides leading performance, quality, and cost efficiency so you don't have to compromise. Try Llama-3-70B-Turbo or Llama-3-8B-Turbo through the Together API now, or contact us to deploy your own dedicated endpoint on Together Cloud.
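A minimal call through the Together Python SDK might look like the following; the exact model identifier is a guess based on the naming above, so confirm the current string in the Together model catalog before using it:

```python
import os
from together import Together  # pip install together

# Assumes TOGETHER_API_KEY is set in the environment.
client = Together(api_key=os.environ["TOGETHER_API_KEY"])

response = client.chat.completions.create(
    # Model string is illustrative; check the Together models page for
    # the exact Turbo identifier.
    model="meta-llama/Meta-Llama-3-8B-Instruct-Turbo",
    messages=[{"role": "user", "content": "Summarize FlashAttention-3 in two sentences."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```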

Together Lite endpoints

Together Lite endpoints are designed for applications demanding fast performance and high capacity at the lowest cost.

AlpacaEval 2.0 results for Together Lite and Groq show that on length-controlled (LC) win rate the two are almost identical, and on raw win rate Together Lite endpoints are slightly better than Groq (+0.29%).

Compared to vLLM, Together Lite provides a 12x reduction in cost. Across a range of common inference serving regimes, Together Lite running on only two A100 GPUs outperforms vLLM FP16 and FP8 running on eight H100 GPUs by up to 30%. This directly translates into the highest cost reduction and capacity improvement for large-scale production deployments.

In summary, Together Lite endpoints provide a highly economical solution with a modest compromise in quality. Try Llama-3-70B-Lite or Llama-3-8B-Lite through the Together API now, or contact us to deploy your own dedicated endpoint on Together Cloud.

Together Reference endpoints

Quality has always been at the heart of the Together Inference solution. Together Reference endpoints provide full precision FP16 quality consistent with model providers' base implementations and reference architectures. Through continued innovation and optimization, these reference models are available with the fastest performance, in some cases even faster than quantized models.

Together Reference endpoints achieve a 4x speedup over the state-of-the-art open-source inference engine vLLM across normal serving regimes.

Further, based on third-party benchmarks from Artificial Analysis, Together Reference endpoints (FP16) deliver over 2x the tokens per second of Amazon Bedrock, Microsoft Azure, and Octo AI, and over 30% more than Fireworks' FP8 models. This performance comes from a combination of an innovative base engine, more efficient memory management, and a low-overhead system architecture.

This combination of full reference quality, with high performance, meets the needs of the most demanding enterprises. Together Reference endpoints are ideal for benchmarking, research, and applications demanding the utmost quality.

Together Inference Engine technical advances

The Together inference solution is a dynamic system that continuously incorporates cutting-edge innovations from both the community and our in-house research. These advancements span several areas, including kernels (e.g., FlashAttention-3 and FlashDecoding), models and architectures (e.g., Mamba and StripedHyena), quality-preserving quantization, speculative decoding (e.g., Medusa, Sequoia, and SpecExec), and other runtime and compiler techniques. The Together Inference Engine builds on these advances to deliver their economic benefits directly to our customers.
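Of these, speculative decoding is perhaps the easiest to illustrate: a small draft model proposes several tokens cheaply, and the target model verifies them, keeping the prefix that agrees. The toy greedy variant below sketches the control flow only; it is not Together's Medusa, Sequoia, or SpecExec implementation:

```python
from typing import Callable, List

# Toy greedy speculative decoding. draft_next and target_next each map a
# token sequence to the next token under greedy decoding; in a real system
# these would be a small draft model and the large target model.
def speculative_decode(
    prompt: List[int],
    draft_next: Callable[[List[int]], int],
    target_next: Callable[[List[int]], int],
    num_tokens: int,
    draft_len: int = 4,
) -> List[int]:
    seq = list(prompt)
    while len(seq) - len(prompt) < num_tokens:
        # 1) Draft model proposes draft_len tokens cheaply.
        proposal = []
        for _ in range(draft_len):
            proposal.append(draft_next(seq + proposal))
        # 2) Target model verifies each proposed position (in practice this is
        #    a single batched forward pass, which is where the speedup comes from).
        accepted = 0
        for i in range(draft_len):
            if target_next(seq + proposal[:i]) == proposal[i]:
                accepted += 1
            else:
                break
        seq.extend(proposal[:accepted])
        # 3) Always emit one token from the target so progress is guaranteed.
        seq.append(target_next(seq))
    return seq[: len(prompt) + num_tokens]

# Tiny demo: both "models" just count upward, so most drafts are accepted.
draft = lambda s: (s[-1] + 1) % 100
target = lambda s: (s[-1] + 1) % 100
print(speculative_decode([0], draft, target, num_tokens=10))
```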

Summary and looking forward

As a research-focused company, we will continue to push the envelope of AI acceleration. The Together Inference Engine is built for extensibility and rapid iteration, enabling us to quickly add support for new models, techniques, and kernels. Our most recent research publications like FlashAttention-3 show that there is continued headroom for optimization.

Together Turbo and Together Lite endpoints are available starting today for Llama 3 models, and will be rolling out across other models soon. We are also introducing new pricing for these endpoints, available on our pricing page, and you can access them now at api.together.ai.

Together, we hope these innovations give you the flexibility to scale your applications with the performance, quality, and cost-efficiency your business demands. We can't wait to see what you build!

Suggested labels

{'label-name': 'Together Inference Engine', 'label-description': 'A high-performance inference engine optimized for generative AI applications with innovative features and endpoints.', 'gh-repo': 'Together AI', 'confidence': 50.0}

ShellLM commented 4 weeks ago

Related content

766 similarity score: 0.87

831 similarity score: 0.87

628 similarity score: 0.87

851 similarity score: 0.86

304 similarity score: 0.86

769 similarity score: 0.86