Increased Speed, More Options for dashDB for Analytics with Pay-As-You-Go and Bluemix Lift

by Ben Hudson

Harnessing the power of IBM dashDB for Analytics just got quicker and easier. We’re excited to introduce two new and improved ways to connect to the cloud and tap into in-memory processing, RStudio and Cloudant integrations, in-database analytics, and other powerful features that will reduce your time to market:

  1. Pay-As-You-Go (PayGo) provisioning: Starting today, you can purchase dashDB for Analytics directly in Bluemix using your credit card*.  We’ll start provisioning your system right away, accelerating your time to value.
  2. Bluemix Lift: Now you can move your on-premises data stores into a dashDB instance even faster. Bluemix Lift, IBM’s newest data movement solution, accelerates data migration by up to 10 times compared with traditional options, with the flexibility of both PayGo and subscription plans to meet your data needs.  Check out the details here.

You can also purchase dashDB for Analytics through a Bluemix subscription.  Try it out today!

About Ben,

Ben Hudson is an Advisory Offering Manager for IBM dashDB for Analytics. He recently obtained his Master’s degree in Computer Science from Wesleyan University in Middletown, CT.

 

*Note: dashDB for Analytics MPP Small for AWS is not available as a PayGo plan.

 

Enterprise Data Warehouse Beyond SQL with Apache Spark

By Torsten Steinbach, Lead Architect for IBM Data Warehousing Advanced Analytics

Enterprise IT infrastructure is often based heavily on relational data warehouses, with all other applications communicating through the data warehouse for analytics. At the same time, line-of-business departments are pressing to use open source analytics and big data technology, such as R, Python and Spark, for analytical projects, and to deploy them continuously without having to wait for IT provisioning. Not being able to serve these requests can lead to a proliferation of analytic silos and a loss of control over data. For this reason, the new IBM dashDB Local for software-defined environments (SDEs) and private clouds now integrates a complete Apache Spark stack, enabling you to continue operating established data warehouses with their proven operational quality of service while also running Spark-based workloads out of the box on the same data.

This tightly embedded Apache Spark environment can use the entire set of resources of the dashDB system, and this also applies to MPP scale-out: each dashDB Local node, with its own data partition, is overlaid with a local Apache Spark executor process. The existing data partitions of the dashDB cluster are implicitly carried over to the data frames in Spark, and thus to any distributed parallel processing that Spark performs on this data.
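
For illustration, here is a minimal PySpark sketch of reading a dashDB table into a Spark data frame. It uses a generic JDBC-style read with hypothetical connection values, table and column names; the integrated Spark environment in dashDB Local uses its own data source that derives the partitioning from the cluster layout implicitly, so treat this only as a rough stand-in:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext(appName="dashdb-read")
    sqlContext = SQLContext(sc)

    # Illustrative connection values; inside dashDB Local the Spark executor
    # runs next to its local database partition.
    df = (sqlContext.read
          .format("jdbc")
          .option("url", "jdbc:db2://localhost:50001/BLUDB")
          .option("dbtable", "SALES")
          .option("user", "bluadmin")
          .option("password", "...")
          .load())

    # Note: a plain JDBC read yields a single partition unless partitioning
    # options are set; the integrated data source instead derives partitions
    # from the dashDB cluster layout.
    print(df.rdd.getNumPartitions())
    df.groupBy("REGION").count().show()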


The co-location of the Spark execution capabilities with the database engine minimizes latency in accessing the data and leverages optimized local IPC mechanisms for data transfer. The benefits of this architecture become apparent when we apply standard machine learning algorithms on Spark to data in dashDB Local. Comparing a remote Spark cluster setup with a co-located setup, we found that those algorithms gain a significant increase in speed, even when the remote access is optimized to read data in parallel tasks, one for each database partition in dashDB. So there is indeed a performance advantage provided by the integrated architecture.
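
To give a flavor of the kind of workload compared here, below is a hedged PySpark sketch of running MLlib k-means on data read from dashDB. The table, columns and connection values are hypothetical, and the plain JDBC read again stands in for the integrated data source:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext
    from pyspark.mllib.clustering import KMeans
    from pyspark.mllib.linalg import Vectors

    sc = SparkContext(appName="dashdb-kmeans")
    sqlContext = SQLContext(sc)

    # Hypothetical table with numeric columns X and Y.
    df = (sqlContext.read.format("jdbc")
          .option("url", "jdbc:db2://localhost:50001/BLUDB")
          .option("dbtable", "SENSOR_READINGS")
          .option("user", "bluadmin")
          .option("password", "...")
          .load())

    # Train k-means on the feature vectors and inspect the cluster centers.
    points = df.select("X", "Y").rdd.map(lambda r: Vectors.dense(r[0], r[1]))
    model = KMeans.train(points, k=3, maxIterations=10)
    print(model.clusterCenters)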

In addition, the Spark-enabled data warehouse engine can do a lot of things out of the box that were not possible before:

1. Out of the box data exploration & visualization


2. Interactive Machine Learning


3. One-click deployment – Turning interactive notebooks into deployed Spark applications


4. dashDB as a hosting environment to run your Spark applications

Once a Spark application has been deployed to dashDB, it can be invoked in three different ways.

Using spark-submit.sh from the command line and scripts:


Using the dashDB REST API:


Using the SPARK_SUBMIT stored procedure:

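As a rough illustration of these three invocation paths, here is a hedged Python sketch. The wrapper script location, REST endpoint path, stored procedure name and all parameters are assumptions for illustration, not the documented interface:

    import subprocess
    import requests
    import ibm_db

    # 1. Command line: shell out to the spark-submit.sh wrapper.
    subprocess.check_call(
        ["./spark-submit.sh", "--class", "com.example.MyApp", "myapp.jar"])

    # 2. REST API: POST the submission to a dashDB analytics endpoint
    #    (URL path and payload are assumptions).
    resp = requests.post(
        "https://dashdb-host:8443/dashdb-api/analytics/apps/submit",
        auth=("bluadmin", "password"),
        json={"appResource": "myapp.jar", "mainClass": "com.example.MyApp"},
        verify=False)
    print(resp.status_code)

    # 3. SQL stored procedure: callable over any SQL connection
    #    (procedure name and signature are assumptions).
    conn = ibm_db.connect(
        "DATABASE=BLUDB;HOSTNAME=dashdb-host;PORT=50000;"
        "UID=bluadmin;PWD=password;", "", "")
    ibm_db.exec_immediate(
        conn, "CALL SPARK_SUBMIT('myapp.jar', 'com.example.MyApp')")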

5. Out of the box machine learning


In addition, this implementation of Spark capabilities in dashDB Local provides you with a high degree of flexibility in ELT and ETL activities and lets you process and land data in motion in dashDB Local.

Let’s summarize the key benefits that dashDB with integrated Apache Spark provides:

  1. dashDB Local lets you dramatically modernize your data warehouse solutions with advanced analytics based on Spark.
  2. Spark applications processing relational data gain significant performance and operational QoS benefits from being deployed and running inside dashDB Local.
  3. dashDB Local enables analytic solution creation end-to-end, from interactive exploration and machine learning experiments, verification of analytic flows, easy operationalization by creating deployed Spark applications, up to hosting Spark applications in a multi-tenant enterprise warehouse system and integrating them with other applications via various invocation APIs.
  4. dashDB Local allows you to invoke Spark logic via SQL connections.
  5. dashDB Local can land streaming data directly into tables via deployed Spark applications (see the sketch after this list).
  6. dashDB Local can run complex data transformations and feature extractions that cannot be expressed with SQL using integrated Spark.
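
To make benefit 5 concrete, here is a minimal, hedged sketch of what such a deployed Spark Streaming application could look like, assuming a socket text source and a plain JDBC write; host names, table and credentials are hypothetical:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext(appName="land-stream")
    ssc = StreamingContext(sc, 10)  # 10-second micro-batches

    lines = ssc.socketTextStream("stream-host", 9999)  # hypothetical source

    def save_batch(rdd):
        # Append each non-empty micro-batch to a dashDB table.
        if not rdd.isEmpty():
            df = SQLContext(rdd.context).createDataFrame(
                rdd.map(lambda line: (line,)), ["MSG"])
            df.write.jdbc("jdbc:db2://localhost:50001/BLUDB", "EVENTS",
                          mode="append",
                          properties={"user": "bluadmin", "password": "..."})

    lines.foreachRDD(save_batch)
    ssc.start()
    ssc.awaitTermination()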

Please also check out the tutorial playlist for dashDB with Spark here: ibm.biz/BdrLNG.  You can also download a free trial version of dashDB Local at ibm.biz/dashDBLocal to see these Spark features in action for yourself.

 

About Torsten,

Torsten has worked for many years as an IBM software architect on IBM’s database software offerings, with a particular focus on performance monitoring, application integration and workload management. Today, Torsten is the lead architect for advanced analytics in IBM’s data warehouse products and cloud services.

IBM dashDB Local FAQ

When it comes to next-generation data warehousing and data management, the IBM dashDB family offers a range of options that all share a common technology.  dashDB Local is one of the newest offerings, and this blog provides a series of answers to your questions!  Please feel free to add your own questions in the comments at the bottom, and we will respond.

      1.  What is the dashDB family?
        IBM dashDB is a family of next-generation database and data warehouse technologies that help you respond very quickly to application needs.  Originally in data warehousing, IT professionals assembled hardware, software and storage themselves to handle large data sets for analytics.  This was risky, costly and time consuming.  It gave way to the data warehouse appliance, which provided an optimized system for data warehousing and analytics.  The appliance was so successful that many consider it the backbone of their analytics architecture.  But the world of analytics is expanding, and new technologies are needed to handle more requests, more data sources and even self-service needs.  Hybrid data architectures are coming to the forefront to handle these increased needs.  The dashDB family plays a key role here:

        • dashDB for Analytics – a fully managed cloud data warehouse
        • dashDB Local – a configured data warehouse delivered via container technology for flexible deployment
        • dashDB for Transactions – a fully managed database as a service for transactional workloads

        This family is designed to help you respond to new needs very quickly.  It also shares a common engine to help you leverage the same skills across different deployment models and application types. For more information on dashDB, visit dashDB.com.

      2. What is dashDB Local?
        dashDB Local is in-memory, columnar data warehousing software supporting a wide range of analytic workloads, from data marts to enterprise data warehouses. It is deployed using Docker container technology and supports software-defined environments such as a private cloud, a virtual private cloud or the infrastructure of your choice, thus enabling hybrid cloud configurations. dashDB Local can be deployed in minutes, making it fast and easy to deliver an auto-configured data warehouse with built-in Netezza and Oracle compatibility.
      3. What is Docker?
        Docker is container technology that simplifies packaging and distribution of the software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.
      4. What is the difference between Docker and VMware?
        VMware is virtualization technology, while Docker is container technology used for simplified packaging and distribution of software.  While a Docker container provides operating-system-level process isolation, VMware virtualization lets you run multiple virtual machines (VMs) on a single physical server, thus providing hardware-level abstraction. Unlike VMware, Docker does not create an entire virtual operating system, which makes containers lightweight to deploy and faster to start up than VMs. The two technologies can also be used together; for example, Docker containers can be created inside VMs to make a solution ultra-portable.
      5. Is dashDB Local generally available?
        Yes it is. You can find a free trial of it at ibm.biz/dashDBLocal.
      6. On which platforms is dashDB Local supported?
        Today, dashDB Local runs on any platform on which the Docker engine is supported, such as Linux, Microsoft Windows, Apple Macintosh, and cloud providers. More details can be found here as well as on the Docker site.
      7. How is dashDB Local deployed on these platforms?
        dashDB Local software is packaged and deployed using Docker container technology, so it can be installed on any platform where the Docker engine client is supported. This includes Windows, Macintosh and a variety of Linux platforms. Deploying IBM dashDB Local on Windows or Macintosh requires a Linux VM in which you run Docker; by downloading the Docker Toolbox, which includes a GUI and a VM, you can accomplish this easily. Please refer to the dashDB Local knowledge center (documentation) for further details.
      8.  Is dashDB Local available on IBM Bluemix Local?
        Bluemix Local is managed by IBM on a client’s infrastructure. There are no plans to offer dashDB Local on Bluemix Local at this time.
      9.  How long does it take to install dashDB Local?
        dashDB Local is based on Docker container technology, which allows setup and installation in less than 30 minutes. Today, the SMP or MPP version of dashDB Local can be installed in less than 15 minutes, on anything from a simple laptop to a production-grade server. Since various components, such as LDAP security and DSM monitoring, are already bundled into a single container installation, the result is tremendous time savings and a more streamlined, efficient process.
      10. What components are packaged and installed with dashDB Local?
        dashDB Local comprises the following software components, packaged in the Docker container to simplify deployment and speed up the overall setup. At the core is the dashDB analytics engine, tuned for columnar, in-memory workloads, with built-in Netezza and Oracle compatibility. An LDAP server is included for user access management. IBM Data Server Manager, which acts as the key monitoring component, is also included and provides features such as query history monitoring, database performance monitoring and OS-level monitoring.
      11. Is dashDB local available outside of Docker public hub?
        Soon, a standalone version of dashDB Local will be available on IBM Fix Central, leveraging the Docker download command on the host OS. This will remove the dependency on pulling the dashDB Local image from the private, access-controlled repository on the public Docker hub, and will also alleviate situations where a firewall port for access to the public Docker hub cannot be opened.
      12. What are the minimum prerequisites to install dashDB Local?
        You can find the documented prerequisites for dashDB Local in the Knowledge Center.  Some of the key requirements concern the Docker client and POSIX-compliant storage file systems, as documented. Another key requirement is opening the following network ports on the firewall. Ensure that these ports are open on all nodes in the cluster, and that all cluster nodes are defined in each node’s /etc/hosts file (a small connectivity-check sketch follows the list):
        60000-60024, for database FCM
        25000-25999, for Apache Spark
        50022, for SSH/container OS
        50001, for database connection with SSL
        50000, for database connection without SSL
        9929, for communication tests
        9300, for web console status
        8443, for web console HTTPS
        5000, for System Manager
        389, for LDAP
        22, for SSH/host OS
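        For a quick sanity check, a small Python sketch along these lines can probe a few of the listed ports from another machine (the host name is hypothetical):

            import socket

            # A few of the ports listed above, with their roles.
            PORTS = {50000: "database (no SSL)", 50001: "database (SSL)",
                     8443: "web console HTTPS", 50022: "SSH/container OS",
                     389: "LDAP"}

            for port, desc in PORTS.items():
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.settimeout(2)
                status = s.connect_ex(("dashdb-node1", port))  # hypothetical host
                print(port, desc, "open" if status == 0 else "closed/filtered")
                s.close()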
      13. How does Apache Spark fit into the dashDB Local architecture?
        dashDB Local lets you dramatically modernize your data warehouse solutions with advanced analytics based on Spark. Spark is installed and configured within the dashDB Local container, making it fully integrated and able to support a variety of use cases. Spark applications that process relational data can gain significant performance and operational QoS benefits from being deployed and run inside dashDB Local. The integration enables end-to-end analytic solution creation: from interactive exploration and machine learning experiments, through verification of analytic flows and easy operationalization of Spark applications, up to hosting Spark applications in a multi-tenant enterprise warehouse system and integrating them with other applications via various invocation APIs. It allows you to invoke Spark logic via SQL connections, can land streaming data directly into tables via deployed Spark applications, and can run complex data transformations and feature extractions that cannot be expressed in SQL.
      14. I have a running dashDB Local instance where I’ve set DISABLE_SPARK='YES' in the options file. How can I tell whether this option took effect, and how much memory the database is actually using?
        This can be confirmed during startup of the dashDB Local container. When Spark is disabled, the docker start output includes “Spark support is going to be disabled.” When Spark is enabled, it includes “Current spark share : XX% of total memory.”
      15. Is the Spark setting available as an install-time switch, or can I enable and disable Spark every time I start the database by changing this?
        The Spark feature can be enabled or disabled at any time: change the DISABLE_SPARK setting and restart dashDB Local in the container (docker exec -it dashdb stop/start).
      16. Is there currently a maximum number of nodes for dashDB Local?
        Currently, you can create up to a 24-node MPP cluster with dashDB Local.
      17. Can I upgrade from the GA trial to the production version of dashDB Local?
        Yes, you can upgrade the post-GA trial version to a production-grade version. Once the product is purchased, a permanent license can be applied.
      18. How can I get support for dashDB Local?
        Upon purchase of dashDB Local, clients are entitled to support via email or phone and can open a PMR with IBM’s support ticketing system. IBM will support the dashDB Local container and all of the components inside it; for Docker-specific issues, clients should contact the Docker support team. The IBM support team will assist in identifying the problem and advise accordingly.
      19. If I run into a Docker issue, will IBM support handle it for me?
        IBM will support the dashDB Local container, but not the Docker engine itself; it is the client’s responsibility to subscribe to Docker support.  Customers can use the Docker CS engine (from Docker) or the open source Docker RPMs that come with Linux distributions such as Red Hat. Note that Docker provides commercial support only for the Docker CS engine, not for the Docker RPMs, so the choice of a relevant support path for Docker components rests with the customer.
      20. How often are dashDB Local updates/fix packs made available?
        dashDB Local follows an agile, cloud-style development model, and the intent is to roll out container updates frequently, making it easier to stay current with the latest bug fixes and newest features. Updates are handled via the container update process and take less than 30 minutes, similar to the initial container setup.
      21. Can dashDB Local be installed on Amazon AWS or Microsoft Azure?
        dashDB Local can be installed on the infrastructure of your choice, as long as that infrastructure supports Docker container technology. dashDB Local is client-managed and can be installed in your data center or in virtual private cloud infrastructures such as Amazon AWS EC2 or the Microsoft Azure platform.
      22. What kind of storage is required for dashDB Local?
        dashDB Local requires a POSIX-compliant clustered file storage system for MPP clusters only; for a standalone SMP installation this is not a requirement, and you can use standard local disks for an SMP dashDB Local node. A cluster file system is a file system configured so that a group of servers and resources has concurrent access to a single file system; the key is that the cluster appears as a single, highly available system to all end users. This increases the storage utilization rate and can result in high performance.
        Some common examples of clustered file storage systems are:
        – VERITAS Cluster File System (VxFS): Sun Solaris, HP/UX
        – General Parallel File System (GPFS): IBM AIX, Linux
        – GFS2: Red Hat only
      23. Is Oracle compatibility available in dashDB Local?
        Yes, Oracle compatibility is supported in dashDB Local. You can enable applications that were written for an Oracle database to use dashDB Local without having to be rewritten. To use this capability, you must specify that dashDB Local is to run in Oracle compatibility mode prior to initial deployment. Before you begin, the /mnt/clusterfs/ directory must already be created, and you need root authority on the host system OS. By default, Oracle compatibility mode is not enabled; to enable it, you explicitly make an entry in the /mnt/clusterfs/options file prior to deploying dashDB Local. Run the command below, then follow the normal steps for container deployment/initialization:

        echo "ENABLE_ORACLE_COMPATIBILITY='YES'" >> /mnt/clusterfs/options

        For more details:
        http://www.ibm.com/support/knowledgecenter/SS6NHC/com.ibm.swg.im.dashdb.doc/admin/local_oracompat.html
        http://www.ibm.com/support/knowledgecenter/SS6NHC/com.ibm.swg.im.dashdb.doc/admin/local_setup.html#setup

 

Three session guides get you started with data warehousing at IBM Insight at World of Watson

Join us October 24 to 27, 2016 in Las Vegas!

by Cindy Russell, IBM Data Warehouse marketing

IBM Insight has long been the premier data management and analytics event for IBM analytics technologies, and 2016 is no exception.  This year, IBM Insight is being hosted along with World of Watson and runs from October 24 to 27, 2016 at the Mandalay Bay in Las Vegas, Nevada.  It includes 1,500 sessions across a range of technologies and features keynotes by IBM President and CEO, Ginni Rometty; Senior Vice President of IBM Analytics, Bob Picciano; and other IBM Analytics and industry leaders.  Every year, we include a little fun as well, and this year the band is Imagine Dragons.

IBM data warehousing sessions will be available across the event as well as in the PureData System for Analytics Enzee Universe (Sunday, October 23).  Below are product-specific quick reference guides that let you see key sessions and activities at a glance and plan your schedule.  Print these guides and take them with you, or put the links to them on your phone for reference during the conference.

This year, the Expo floor is called the Cognitive Concourse, and we are located in the Monetizing Data section, Cognitive Cuisine experience area.  We’ll take you on a tour across our data warehousing products and will have some fun as we do it, so please stop by.  There is also a demo room where you can see live demos and engage with our technical experts, as well as a series of hands-on labs that let you experience our products directly.

The IBM Insight at World of Watson main web page is located here.  You can register and then use the agenda builder to create your personalized schedule.

IBM PureData System for Analytics session reference guide

Please find the session quick reference guide for PureData System for Analytics here: ibm.biz/wow_enzee

Enzee Universe is a full day of dedicated PureData System for Analytics / Netezza sessions that is held on Sunday, October 23, 2016.  To register for Enzee Universe, select sessions 3459 and 3461 in the agenda builder tool.  This event is open to any full conference pass holder.

During the regular conference, there are also more than 35 PureData, Netezza, and IBM DB2 Analytics Accelerator for z/OS (IDAA) technical sessions across all the conference tracks, as well as hands-on labs.  Several sessions are being presented by IBM clients, so you can see how they put PureData System for Analytics to use.  Click the link above to see the details.

IBM dashDB Family session reference guide

Please find the session quick reference guide for the dashDB family here: ibm.biz/wow_dashDB

There are more than 40 sessions for dashDB, including a “Meet the Family” session that will help you become familiar with new products in this family of modern data management and data warehousing tools.  There is also a “Birds of a Feather” panel discussion on hybrid data warehousing, and one that describes some key use cases for dashDB.  And, you can also see a demo, take in a short theatre session or try out a hands-on lab.

IBM BigInsights, Hadoop and Spark session reference guide

Please find the session quick reference guide for BigInsights, Hadoop and Spark topics here: ibm.biz/wow_biginsights

There are more than 65 sessions related to IBM BigInsights, Hadoop and Spark, with several hands-on labs and theatre sessions. There is everything from an Introduction to Data Science, to Using Spark for Customer Intelligence Analytics, to hybrid cloud data lakes, to client stories of how they use these technologies.

Overall, it is an exciting time to be in the data warehousing and analytics space.  This conference represents a great opportunity to build depth on IBM products you already use, learn new data warehousing products, and look across IBM to learn completely new ways to employ analytics—from Watson to Internet of Things and much more.  I hope to see you there.

IBM BigInsights version 4.2 is here!

Brings Hadoop, Spark and SQL into one flexible, open analytics platform

by Andrea Braida

Today, we are pleased to announce that IBM BigInsights® 4.2 is generally available. BigInsights 4.2 is built on IBM Open Platform (IOP), IBM’s big data platform with Apache Spark and Apache Hadoop. IOP offers the ideal combination of Apache components to support big data applications. The BigInsights 4.2 release puts the full range of analytics for Hadoop, Spark and SQL into the hands of advanced analytics and data science teams on a single platform.

IBM has deep Hadoop expertise and, in the last year, has moved into a very strong Apache Spark leadership position as well. IBM is integrating and embedding Spark across its analytics portfolio, which means that customers get Spark in any way they want it. No one else in the market is doing this today. (BigInsights 4.2 also includes comprehensive machine learning support: Spark, SystemML and integration with H2O.)

If a recommended Hadoop distribution is something you’re interested in, the most significant release features, including Spark integration, are summarized for you below.

What’s new in BigInsights 4.2?

BigInsights 4.2 introduces a range of new capabilities that make it more open, flexible and powerful:

Integration with Apache Spark 1.6.1

Access the processing and analytics power of Spark, which includes:

  • dramatically faster batch and ETL processing with the Spark Core
  • near real-time analytics with Spark Streaming
  • built-in, highly extensible machine learning libraries with Spark MLlib
  • querying of unstructured data and more value from free-form text analytics with Spark SQL
  • graph computation and graph analytics with Spark GraphX
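
As a small taste of the Spark SQL piece, a minimal PySpark sketch that queries semi-structured JSON with SQL (the file path and field names are hypothetical):

    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext(appName="biginsights-sparksql")
    sqlContext = SQLContext(sc)

    # Load semi-structured JSON; the schema is inferred automatically.
    events = sqlContext.read.json("hdfs:///tmp/events.json")
    events.registerTempTable("events")

    # Free-form SQL over the inferred schema.
    top = sqlContext.sql("SELECT user, COUNT(*) AS n FROM events "
                         "GROUP BY user ORDER BY n DESC LIMIT 10")
    top.show()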

IBM Big SQL enhancements for RDBMS offload and consolidation.
Big SQL now understands SQL dialects from other vendors and products, such as Oracle, IBM DB2® and IBM Netezza®, making it the ultimate platform for RDBMS offload and consolidation. It is faster and easier to offload old data from existing enterprise data warehouses or data marts to free up capacity while preserving most of the familiar SQL from those platforms. Big SQL is also the only SQL engine for Hadoop that exploits Hive, HBase, and Spark concurrently for best-in-class analytic capabilities.
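
Because Big SQL is reachable through standard IBM database drivers, offloaded tables stay accessible from existing tooling. Below is a hedged sketch of querying a Big SQL head node from Python; the host, port, credentials and table are illustrative assumptions:

    import ibm_db

    # Big SQL is accessed through standard DB2-style drivers; values are
    # illustrative only.
    conn = ibm_db.connect(
        "DATABASE=BIGSQL;HOSTNAME=bigsql-head;PORT=32051;"
        "UID=bigsql;PWD=password;", "", "")
    stmt = ibm_db.exec_immediate(conn, "SELECT COUNT(*) FROM SALES_HISTORY")
    print(ibm_db.fetch_tuple(stmt))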

New Apache components and currency updates to existing components
BigInsights 4.2 now includes Apache Ranger, Apache Phoenix and the Titan graph database. BigInsights is currently the only Hadoop distribution with a graph database. Notable currency updates include updates to Ambari, Kafka, and Solr.

ODPi Runtime Certification
With V4.2, IOP is among the first Hadoop platforms to comply with the Open Data Platform (ODPi) Runtime Certification. This means it is easier for independent software vendors to adopt IOP as a platform, and ensures platform openness for customers.

Introducing IBM Big Replicate
IBM Big Replicate provides continuous availability and data consistency via a patented active-transactional replication technology that also provides streaming backup, hybrid cloud, and burst-to-cloud. This is an optimized data replication capability for uninterrupted migration from other distributions to IBM, from cloud to on-premises, and vice versa.

Why should you consider BigInsights 4.2?

Some key standout features of BigInsights 4.2 are Big SQL performance improvements, deeper analytics with Spark and a graph database, and a more open and secure platform.

Big SQL performance improvements

Big SQL is the SQL query engine in BigInsights.  New enhancements in 4.2 make it super fast and super easy to install and manage, and they deliver significant performance improvements:

  • Built-in components improve performance with less tuning (auto-analyze)
  • Improved memory management and operational stability
  • High performance transactional support is now included
  • Apache Phoenix provides easier access to HBase with a SQL interface
  • In Technology Preview, in-memory technology (BLU Acceleration) on Big SQL head nodes is now available for faster processing

These enhancements make BigInsights an ideal platform for RDBMS off-load and consolidation, as well as a hybrid engine that can help you exploit fit-for-purpose Hadoop subsystems.

Deeper and improved analytics with Spark and Graph Database

  • Easier and richer text analytics
  • New AQL Editor makes it easier to migrate existing AQL to V4.2
  • Web-based, drag-and-drop development
  • Powerful, expressive, AQL language to get more done, with less work
  • New run-on-cluster with Spark
  • Pre-built extractors: Named Entity, Financial, Sentiment, Machine Data
  • Graph Database – Titan
  • IOP is the first Hadoop distribution to include a graph database

More open and more secure

  • For security, BigInsights 4.2 is compliant with industry standards and includes Apache Ranger, which provides centralized security management and auditing of users and of the REST interface. Ranger supports HDFS, YARN, Hive, HBase, and Kafka, allowing users to spend more time analyzing data and less time worrying about security.
  • BigInsights now enables easy product integration through ODPi Runtime Certification. With V4.2, IOP is among the first Hadoop platforms to comply with the certification, which makes it easier for independent software vendors to adopt IOP as a platform and ensures platform openness for clients.

The BigInsights core – IBM Open Platform (IOP) – was designed with a focus on analytics, operational excellence, and security, and is certified by the Open Data Platform initiative (ODPi).

Get started free

BigInsights is available on-premises, on-cloud, and is integrated with other systems in use today, with enterprise-class support available. (Please note that BigQuality, BigIntegrate, Phoenix, Ranger, Solr, and Titan are available on BigInsights on-premises only, and are planned for the on-cloud offering.*)

BigInsights is also integrated with a broad and open ecosystem of data and analytics tools, allowing for a true hybrid architecture. BigInsights on Cloud was recently ranked as a leader in the Hadoop Cloud services market by Forrester, which I’ll share more about in my next blog.


Get started with a free version of the BigInsights core, IBM Open Platform (IOP). Click here.

And for more information about the 4.2 release, please visit our release overview or refer to the Big Replicate overview.  Or visit the Hadoop solutions page.

About Andrea,

Andrea Braida is a Portfolio Marketing Manager at IBM for Big Data Analytics and Data Science offerings. A former start-up founder, she has extensive product management, product marketing, and data science marketing experience within both global technology giants and start-ups. Andrea is based in Seattle, Washington.

* The information contained in this presentation is provided for informational purposes only.

While efforts were made to verify the completeness and accuracy of the information contained in this presentation, it is provided “as is”, without warranty of any kind, express or implied. In addition, this information is based on IBM’s current product plans and strategy, which are subject to change by IBM without notice. IBM shall not be responsible for any damages arising out of the use of, or otherwise related to, this presentation or any other documentation. Nothing contained in this presentation is intended to, or shall have the effect of: 1) Creating any warranty or representation from IBM (or its affiliates or its or their suppliers and/or licensors); or 2) altering the terms and conditions of the applicable license agreement governing the use of IBM software.
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment.  The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multi-programming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed.  Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.

IBM dashDB Local opens its preview for data warehousing on private clouds and more!

by Mitesh Shah

Just like in the story of Goldilocks … you may be looking for modern data warehousing that is “just right.”  Your IT strategy may include cloud, and you may like the simplicity and scalability benefits of cloud … yet some data and applications may need to stay on-premises for a variety of reasons.  Traditional data warehouses provide essential analytics, yet they may not be right for new types of analytics or data born on the cloud, or they simply cannot contain a growing workload of new requests.

IBM dashDB Local is an open preview technology that is designed to give you “just right” cloud-like simplicity and flexibility.  It delivers a configured data warehouse in a Docker container that you can deploy wherever you need it, as long as Docker is supported on that infrastructure. Often, this is a private cloud, virtual private cloud (AWS/Azure), or other software-defined infrastructure. You gain management simplicity and have an environment that you can control more directly.

Download and install dashDB Local quickly and simply via Docker container technology.

dashDB Local may be the right choice when you have complex applications that must be readied for cloud, SLAs or regulations that require data or applications to stay on premises, or new analytics requests that must be addressed very quickly with easy scale-in and scale-out capabilities.

dashDB Local complements the dashDB data warehouse as a service offering that is delivered via IBM Bluemix. Because both products are based on a common database technology, you can move workloads across these editions without costly and complex application change!   This is one example of how we define a hybrid data warehouse and how it can help improve your flexibility over time as your needs evolve.

Since dashDB Local began its closed preview in February of 2016, the team has rallied to bring in a comprehensive set of data warehousing features to this edition of dashDB. We have been listening to the encouraging feedback from our initial preview participants, and as a result, we now have a solution that is open for you to test!

So what are you waiting for?


Our early adopters have been fascinated by the power and ease of deployment of the Docker container.  It’s become commonplace to hear that participants can deploy a full MPP data warehouse offering with in-memory processing and columnar capabilities, on the infrastructure of their choice, within 15-20 minutes. One client said that dashDB Local is as easy to deploy and manage as a mobile app! We are thrilled by this type of feedback!

Workload monitoring in dashDB Local delivers elasticity to scale out or in.

The open preview (v. 0.5.0) offers extreme scale-out and scale-in capabilities. Yes, you heard me right: scale-in provides the elasticity to avoid tying up your valuable resources beyond peak workloads. This maximizes return on investment for your variable reporting and analytics solutions.  The open preview will also help you test drive the Netezza compatibility (IBM PureData System for Analytics) within dashDB technology, as well as analytics support using RStudio. Automated high availability is another attractive feature that is provided out of the box for you to see and test.

Preview participants have been eager to test drive query performance. One participant says, “We are very impressed with the performance, and within no time we have grown our dataset of 40 million to 200 million records (a few TBs) and the analytics test queries run effortless.” Our participants are leveraging their data center infrastructure, whether bare metal or virtualized (VMs), to get started, and some have installed it on their laptops to quickly gain an understanding of this preview.

Register for the dashDB Local preview and find out how it can be “just right” for you!  Go here to give it a try and get ready to be wowed.  We value and need your feedback to help us prioritize features that are important to your business.  All the best, and don’t hesitate to drop me a line to let me know what you think!


About Mitesh,

Mitesh Shah is the product manager for the new dashDB Local data warehousing solution as a software-defined environment (SDE) that can be used on private clouds and platforms that support Docker container technology. He has broad experience around various facets of software development revolving around relational databases and data warehousing technologies.  Throughout his career, Mitesh has enjoyed a focus on helping clients address their data management and solution architecture needs.

What you need to know: Software-Defined Environments (SDE) for data warehousing and more

by James Cho and Maria Attarian

There’s a new kid on the block, and it’s called the SDE!  The term stands for Software-Defined Environment, and it is here to change the way we think about the world of application, integration and middleware – as well as data warehouses. But first things first: let’s talk about the SDE and how it can help you deliver more end-user services more easily.

What is an SDE and why use one when you have traditional environment approaches?

Put simply, a Software Defined Environment (SDE) optimizes the entire computing infrastructure — compute, storage and network resources. An SDE can automatically tailor itself to meet the needs of the workload that must be executed.

With traditional environment approaches, compute, storage, and network resources are allocated and assigned to workloads manually, and this is the problem from which the need for SDE technology emerged. To remove these manual steps, an SDE takes into account application characteristics, best-available resources and service-level policies when dynamically allocating resources to workloads. An SDE also strives to deliver continuous, “on the fly” optimization and reconfiguration to address infrastructure issues.

So what are the fundamental ingredients to doing this? Policy-based compliance checks and updates are essential and make an SDE easy to manage. Delivering in the public or private cloud requires high-speed analytic processing capabilities, as well as rapid integration, automation and optimization. When factors such as these are in place, it becomes clear that SDE technology helps accelerate business success and brings value to the customer because the solution is responsive and adaptive.

So what does IBM have to offer in the SDE space?

dashDB Local (currently in preview) is the IBM data warehouse offering for SDEs such as private clouds, virtual private clouds and other infrastructures that support the Docker container technology. It is designed to provision a full data warehouse stack in minutes and helps you manage the service in your own public or private cloud, while maintaining existing operational and security processes.

There are three design principles that dashDB Local tackles head-on based on feedback from our customers.

  1. So simple that anyone can deploy it

By packaging our software stack into a Docker container, provisioning dashDB Local can be as simple as one docker run command on Linux servers that have the Docker engine installed. It can be as easy as a Docker hub search for “dashDB” followed by a single click on “CREATE” using Docker Kitematic on Windows or Mac machines. Software stack updates are as simple as updating a mobile app: run the same docker run command against a new version of the container on your existing installation.

  2. Flexible enough to deploy anywhere

dashDB Local can be deployed on any supported Docker installation on Linux, cloud, OS X and Windows platforms with minimal prerequisites. Entry-level hardware requirements start at 8 GB RAM and 20 GB of storage, which is suitable for a development/test environment or QA work on your laptop. On larger servers, such as a 48-core machine with 3 TB of RAM, the dashDB container will auto-configure itself to the host it is installed on. Persistent, durable storage of your choice must be mounted at /mnt/clusterfs to hold your data. To summarize, it is flexible enough to empower you to use the hardware you already have in your data center or in the public cloud of your choice.

  3. Independent of your infrastructure capabilities

Existing monitoring and security overlays can remain on your host OS while the dashDB stack is isolated inside its container. You can fully utilize existing infrastructure capabilities such as the copy and replication services of your storage. Existing monitoring tools for systems management and network monitoring, popular cloud management tools such as OpenStack and Kubernetes, and public cloud monitoring tools like AWS CloudWatch can all continue to be used. The isolation of the dashDB Local container allows you to embrace your own data center standards. Thus, it is independent and empowers you to do what you already know how to do.

For more information on dashDB Local, please visit the public Docker repository. An early access preview of dashDB Local is now available: test it for yourself and help shape the solution.  Request dashDB Local preview access here.

About James and Maria,

James Cho is a Senior Technical Staff Member and Chief Architect for IBM dashDB Local. He has been a technical leader of integrated warehouse solutions and appliances at IBM for over 15 years. He currently focuses on data warehouse solutions delivered in public and private cloud data centers. His previous experience includes data warehouse DBA work, publication of industry-standard TPC performance benchmarks, and BI architecture and deployment responsibilities. James is a 1996 graduate of the University of Texas, where he earned a bachelor of science in computer science.

Follow James on LinkedIn

Maria Attarian has worked for IBM for the past four years on data warehousing technologies such as PureData System for Analytics and dashDB. As a software engineer, Maria is charged with finding new and innovative ways to deliver better products for clients and takes on a variety of challenges in this role. Maria is also active with the IEEE Young Professionals Toronto chapter; she served as its chairperson for more than a year and remains an active member. Maria holds a master’s degree from the University of Waterloo and a bachelor of engineering degree from the National Technical University of Athens in Greece.