
Apache Spark 1.1: The State of Spark Streaming

September 16, 2014 by Tathagata Das and Patrick Wendell
With Apache Spark 1.1 recently released, we’d like to take this occasion to feature one of the most popular Spark components - Spark...

Announcing Apache Spark 1.1

September 11, 2014 by Patrick Wendell
Today we’re thrilled to announce the release of Apache Spark 1.1! Apache Spark 1.1 introduces many new features along with scale and stability improvements. This post will introduce some key features of Apache Spark 1.1 and provide context on the priorities of Spark for this and the next release.

Statistics Functionality in Apache Spark 1.1

One of our philosophies in Apache Spark is to provide rich and friendly built-in libraries so that users can easily assemble data pipelines. With Spark, and MLlib in particular, quickly gaining traction among data scientists and machine learning practitioners, we’re observing a growing demand for data analysis support outside of model fitting. To address this need, we have started to add scalable implementations of common statistical functions to facilitate...
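As a rough illustration of the kind of statistics helpers this excerpt describes, here is a minimal PySpark sketch using Statistics.colStats and Statistics.corr from pyspark.mllib.stat; the input vectors and application name are assumptions made for illustration, not examples taken from the post.

```python
# A minimal sketch of MLlib's statistics helpers; the data below is made up.
from pyspark import SparkContext
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.stat import Statistics

sc = SparkContext(appName="mllib-stats-sketch")  # hypothetical app name

# An RDD of feature vectors (illustrative values).
observations = sc.parallelize([
    Vectors.dense([1.0, 10.0, 100.0]),
    Vectors.dense([2.0, 20.0, 200.0]),
    Vectors.dense([3.0, 30.0, 300.0]),
])

# Column-wise summary statistics, computed in a distributed pass over the RDD.
summary = Statistics.colStats(observations)
print(summary.mean())       # per-column mean
print(summary.variance())   # per-column variance
print(summary.numNonzeros())

# Pairwise Pearson correlations between columns, returned as a local matrix.
print(Statistics.corr(observations, method="pearson"))

sc.stop()
```

Because colStats and corr return only small local summaries, this pattern scales to far larger RDDs than the toy input shown here.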

Mining Ecommerce Graph Data with Apache Spark at Alibaba Taobao

August 14, 2014 by Andy Huang and Wei Wu
This is a guest blog post from our friends at Alibaba Taobao. Alibaba Taobao operates one of the world’s largest e-commerce platforms. We collect hundreds of petabytes of data on this platform and use Apache Spark to analyze these enormous amounts of data. Alibaba Taobao probably runs some of the largest Spark jobs in the world. For example, some Spark jobs run for weeks to perform feature extraction on petabytes of image data. In this blog post, we share our...

Scalable Collaborative Filtering with Apache Spark MLlib

July 22, 2014 by Burak Yavuz and Reynold Xin
Recommendation systems are among the most popular applications of machine learning. The idea is to predict whether a customer would like a certain item: a product, a movie, or a song. Scale is a key concern for recommendation systems, since computational complexity increases with the size of a company's customer base. In this blog post, we discuss how Apache Spark MLlib enables building recommendation models from billions of records in just a few lines of Python...
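Since the excerpt mentions building recommendation models in a few lines of Python, here is a hedged sketch using MLlib's ALS from pyspark.mllib.recommendation; the file path, column layout, rank, and user id are illustrative assumptions, not details from the post.

```python
# A minimal collaborative-filtering sketch with MLlib's ALS; the path and
# parameters below are illustrative assumptions.
from pyspark import SparkContext
from pyspark.mllib.recommendation import ALS, Rating

sc = SparkContext(appName="als-sketch")  # hypothetical app name

# Hypothetical input file with lines of the form "userId,productId,rating".
ratings = (sc.textFile("ratings.csv")
             .map(lambda line: line.split(","))
             .map(lambda f: Rating(int(f[0]), int(f[1]), float(f[2]))))

# Train a matrix-factorization model (rank and iteration count chosen arbitrarily).
model = ALS.train(ratings, rank=10, iterations=10, lambda_=0.01)

# Top-5 product recommendations for a user id assumed to exist in the data.
for rec in model.recommendProducts(42, 5):
    print(rec)

sc.stop()
```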

Distributing the Singular Value Decomposition with Apache Spark

July 21, 2014 by Li Pu and Reza Zadeh
Guest post by Li Pu from Twitter and Reza Zadeh from Databricks on their recent contribution to Apache Spark's machine learning library. The...
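For readers curious what a distributed SVD call looks like in code, below is a rough sketch (not taken from the post) using RowMatrix.computeSVD from pyspark.mllib.linalg.distributed; it assumes a PySpark version that exposes computeSVD from Python, and the tiny matrix is invented for illustration.

```python
# A minimal distributed-SVD sketch; the 3x3 matrix is made up, and this assumes
# a PySpark version where RowMatrix.computeSVD is available from Python.
from pyspark import SparkContext
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.linalg.distributed import RowMatrix

sc = SparkContext(appName="svd-sketch")  # hypothetical app name

rows = sc.parallelize([
    Vectors.dense([1.0, 0.0, 7.0]),
    Vectors.dense([2.0, 5.0, 1.0]),
    Vectors.dense([4.0, 3.0, 9.0]),
])
mat = RowMatrix(rows)

# Top-2 singular values/vectors; U comes back as a distributed RowMatrix,
# while s and V are small local objects.
svd = mat.computeSVD(2, computeU=True)
print(svd.s)   # singular values
print(svd.V)   # right singular vectors (local dense matrix)
print(svd.U.numRows(), svd.U.numCols())  # left singular vectors, distributed

sc.stop()
```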

The State of Apache Spark in 2014

July 18, 2014 by Matei Zaharia
This post originally appeared in insideBIGDATA and is reposted here with permission. With the second Spark Summit behind us, we wanted to take...

New Features in MLlib in Apache Spark 1.0

July 16, 2014 by Xiangrui Meng
MLlib is an Apache Spark component focusing on machine learning. It became a standard component of Spark in version 0.8 (Sep 2013). The...

Shark, Spark SQL, Hive on Spark, and the future of SQL on Apache Spark

July 1, 2014 by Reynold Xin
With the introduction of Spark SQL and the new Hive on Apache Spark effort ( HIVE-7292 ), we get asked a lot about...

Exciting Performance Improvements on the Horizon for Spark SQL
