By Torsten Steinbach, Lead Architect for IBM Data Warehousing Advanced Analytics
Enterprise IT infrastructure is often based heavily on relational data warehouses, with all other applications communicating through the data warehouse for analytics. Line-of-business departments press to use open source analytics and big data technologies such as R, Python and Spark for analytical projects, and to deploy them continuously without waiting for IT provisioning. Failing to serve these requests can lead to a proliferation of analytic silos and loss of control over data. For this reason, the new IBM dashDB Local for software-defined environments (SDEs) and private clouds now integrates a complete Apache Spark stack, enabling you to continue operating an established data warehouse and leverage its proven operational quality of service, while running Spark-based workloads out of the box on the same data.
This tightly embedded Apache Spark environment can use the entire set of resources of the dashDB system, which also applies in an MPP scale-out. Each dashDB Local node, with its own data partition, is overlaid with a local Apache Spark executor process. The existing data partitions of the dashDB cluster are implicitly carried over to the data frames in Spark, and thus to any distributed parallel processing that Spark performs on this data.
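The partition alignment described above can be sketched in plain Python. This is purely illustrative: dashDB does the distribution inside the engine, and the hash function, partition count, and toy table here are all assumptions made for the sketch.

```python
import zlib
from collections import defaultdict

NUM_PARTITIONS = 4  # illustrative: one database partition per MPP node

def partition_of(key):
    # stand-in for the engine's distribution hash on the distribution key
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

# toy table: (customer, amount)
rows = [("alice", 10.0), ("bob", 5.0), ("alice", 7.5), ("carol", 3.0)]

# rows land in partitions exactly as the database distributed them
partitions = defaultdict(list)
for key, amount in rows:
    partitions[partition_of(key)].append((key, amount))

# each "Spark executor" works only on its co-located partition,
# so no raw rows cross node boundaries during the partial step
def executor_task(local_rows):
    partial = defaultdict(float)
    for key, amount in local_rows:
        partial[key] += amount
    return partial

partials = [executor_task(partitions[p]) for p in range(NUM_PARTITIONS)]

# only the small partial results are merged across nodes
totals = defaultdict(float)
for partial in partials:
    for key, amount in partial.items():
        totals[key] += amount

print(dict(totals))  # alice: 17.5, bob: 5.0, carol: 3.0
```

The point of the sketch is the data-locality contract: because the Spark partitioning mirrors the database partitioning, the expensive per-row work stays on the node that already holds the rows.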
The co-location of the Spark execution capabilities with the database engine minimizes data-access latency and leverages optimized local IPC mechanisms for data transfer. The benefits of this architecture become apparent when standard Spark machine learning algorithms are applied to data in dashDB Local. Comparing a remote Spark cluster setup with the co-located setup, we found that these algorithms run significantly faster in the co-located setup, even when the remote setup is optimized to read data in parallel tasks, one per database partition in dashDB. So the integrated architecture does indeed provide a performance advantage.
In addition, the Spark-enabled data warehouse engine can do a lot of things out of the box that were not possible before:
1. Out-of-the-box data exploration and visualization
2. Interactive machine learning
3. One-click deployment, turning interactive notebooks into deployed Spark applications
4. dashDB as a hosting environment to run your Spark applications; once a Spark application has been deployed to dashDB, it can be invoked in three different ways
5. Out-of-the-box machine learning
In addition, this implementation of Spark capabilities in dashDB Local gives you a high degree of flexibility in ETL and ELT activities, letting you process data in motion and land it in dashDB Local.
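A transform-and-land flow of this kind can be sketched as follows, with an in-memory SQLite table standing in for a dashDB Local table. This is a sketch under that assumption only; in the real system, a deployed Spark application would write through the co-located database engine rather than SQLite.

```python
import sqlite3

# stand-in warehouse table; in dashDB Local this would be a database
# table reached through the integrated Spark environment
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (region TEXT, amount REAL)")

# extract: raw semi-structured input, e.g. lines from an upstream feed
raw = ["EMEA;10,50", "AMER;3,25", "EMEA;1,00"]

# transform: parsing and cleansing that is awkward to express in SQL
def transform(line):
    region, amount = line.split(";")
    return region, float(amount.replace(",", "."))

# load: land the cleansed rows in the warehouse table
db.executemany("INSERT INTO sales VALUES (?, ?)", (transform(l) for l in raw))
db.commit()

total = db.execute(
    "SELECT SUM(amount) FROM sales WHERE region='EMEA'").fetchone()[0]
print(total)  # 11.5
```

Once landed, the data is ordinary relational data, so downstream SQL consumers need no knowledge of the Spark-side transformation.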
Let’s summarize the key benefits that dashDB with integrated Apache Spark provides:
- dashDB Local lets you dramatically modernize your data warehouse solutions with advanced analytics based on Spark.
- Spark applications processing relational data gain significant performance and operational QoS benefits from being deployed and running inside dashDB Local.
- dashDB Local enables end-to-end creation of analytic solutions: from interactive exploration and machine learning experiments, through verification of analytic flows and easy operationalization as deployed Spark applications, up to hosting those applications in a multi-tenant enterprise warehouse system and integrating them with other applications via various invocation APIs.
- dashDB Local allows you to invoke Spark logic via SQL connections.
- dashDB Local can land streaming data directly into tables via deployed Spark applications.
- dashDB Local can run complex data transformations and feature extractions that cannot be expressed with SQL using integrated Spark.
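The last two points above, landing data in motion and running transformations beyond SQL, can be sketched together in plain Python. The micro-batch loop, the regex-based feature extraction, and the SQLite stand-in for a dashDB table are all assumptions made for illustration, not the actual dashDB Spark APIs.

```python
import re
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (device TEXT, error_count INTEGER)")

# feature extraction that plain SQL cannot easily express:
# count ERROR tokens in a free-text log payload
def extract_features(payload):
    return len(re.findall(r"\bERROR\b", payload))

# stand-in for a stream source; in dashDB Local a deployed Spark
# application would consume a real stream and land micro-batches
micro_batches = [
    [("dev1", "ERROR boot ERROR fan"), ("dev2", "ok")],
    [("dev1", "ERROR temp")],
]

for batch in micro_batches:
    rows = [(device, extract_features(payload)) for device, payload in batch]
    db.executemany("INSERT INTO events VALUES (?, ?)", rows)
    db.commit()  # each micro-batch lands as one transaction

counts = dict(db.execute(
    "SELECT device, SUM(error_count) FROM events GROUP BY device"))
print(counts)  # {'dev1': 3, 'dev2': 0}
```

Committing per micro-batch keeps the landed table consistent at batch boundaries, so queries running concurrently against the warehouse never observe a half-written batch.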
Please also check out the tutorial playlist for dashDB with Spark here: ibm.biz/BdrLNG. You can also download a free trial version of dashDB Local at ibm.biz/dashDBLocal to see these Spark features in action for yourself.
Torsten has worked over many years as an IBM software architect for IBM’s database software offerings with particular focus on performance monitoring, application integration and workload management. Today, Torsten is the lead architect for advanced analytics in IBM’s data warehouse products and cloud services.