What you need to know: Software-Defined Environments (SDE) for data warehousing and more

by James Cho and Maria Attarian

There’s a new kid on the block: the Software Defined Environment (SDE). It is here to change the way we think about applications, integration and middleware, as well as data warehouses. But first things first. Let’s talk about what an SDE is and how it can help you deliver end-user services more easily.

What is an SDE and why use one when you have traditional environment approaches?

Put simply, a Software Defined Environment (SDE) optimizes the entire computing infrastructure — compute, storage and network resources. An SDE can automatically tailor itself to meet the needs of the workload that must be executed.

In traditional environments, compute, storage, and network resources are allocated and assigned to workloads manually; this is the problem that gave rise to SDE technology. To remove those manual steps, an SDE takes into account application characteristics, the best available resources, and service-level policies when dynamically allocating resources to workloads. An SDE also strives to deliver continuous, “on the fly” optimization and reconfiguration to address infrastructure issues.

So what are the fundamental ingredients? Policy-based compliance checks and updates are essential and make an SDE easy to manage. Delivering in the public or private cloud requires high-speed analytic processing, as well as rapid integration, automation and optimization. When these pieces are in place, SDE technology helps accelerate business success and brings value to the customer, because the solution is responsive and adaptive.

So what does IBM have to offer in the SDE space?

dashDB Local (currently in preview) is the IBM data warehouse offering for SDEs such as private clouds, virtual private clouds and other infrastructures that support Docker container technology. It is designed to provision a full data warehouse stack in minutes and helps you manage the service in your own public or private cloud, while maintaining existing operational and security processes.

Based on feedback from our customers, dashDB Local was built around three design principles.

  1. So simple that anyone can deploy it

Because our software stack is packaged into a Docker container, provisioning dashDB Local can be as simple as one docker run command on Linux servers that have the Docker engine installed. It can be as easy as a Docker Hub search for “dashDB” followed by a single click on “CREATE” using Docker Kitematic on Windows or Mac machines. Software stack updates are as simple as updating a mobile app: you run the same docker run command against a new version of the container on your existing installation, as sketched below.
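
Here is a minimal sketch of that flow on a Linux host. The image name ibmdashdb/preview and the mount path are assumptions for illustration only; consult the preview documentation for the exact values.

    # Find the image on Docker Hub.
    docker search dashdb

    # Provision the warehouse stack in one command. The image name
    # "ibmdashdb/preview" is a placeholder for the published image.
    docker run -d --name dashDB --privileged=true \
      -v /mnt/clusterfs:/mnt/clusterfs \
      ibmdashdb/preview:latest

    # Updating the stack later follows the same pattern: pull the new
    # version, replace the container, and keep the data volume.
    docker pull ibmdashdb/preview:latest
    docker stop dashDB && docker rm dashDB
    docker run -d --name dashDB --privileged=true \
      -v /mnt/clusterfs:/mnt/clusterfs \
      ibmdashdb/preview:latest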

  2. Flexible enough to deploy anywhere

dashDB Local can be deployed on any supported Docker installation on Linux, in the cloud, or on OS X and Windows, with minimal prerequisites. Entry-level hardware requirements start at 8 GB of RAM and 20 GB of storage, which is suitable for a development/test environment or QA work on your laptop. On larger servers, such as a 48-core machine with 3 TB of RAM, the dashDB container will auto-configure itself to the host it is installed on. Persistent, durable storage of your choice must be mounted at /mnt/clusterfs to hold your data. In short, it is flexible enough to let you use the hardware you already have in your data center or in the public cloud of your choice.
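
As a rough sketch, preparing a Linux host might look like the following; the device name /dev/sdb1 is a placeholder for whatever durable storage you actually use.

    # Create the mount point the container expects for persistent data.
    sudo mkdir -p /mnt/clusterfs

    # Mount durable storage of your choice there. /dev/sdb1 is a
    # placeholder; a network filesystem works as well.
    sudo mount /dev/sdb1 /mnt/clusterfs

    # Sanity-check the entry-level requirements: memory and free storage.
    free -g
    df -h /mnt/clusterfs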

  3. Independent of your infrastructure capabilities

Existing monitoring and security overlays can remain on your host OS while the dashDB stack stays isolated inside its container. You can fully utilize existing infrastructure capabilities, such as your storage’s copy and replication services. Existing tools for systems management and network monitoring, popular cloud management tools such as OpenStack and Kubernetes, and public cloud monitoring tools like AWS CloudWatch can all continue to be used. The isolation of a dashDB Local container allows you to embrace your own data center standards. Thus, it is independent and empowers you to do what you already know how to do.
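
Because the stack runs as a standard Docker workload, even the plain Docker tooling on the host gives you a monitoring view without touching anything inside the container. A small sketch, assuming the container is named dashDB as in the earlier example:

    # Live CPU, memory, and I/O usage for the container, from the host.
    docker stats dashDB

    # Tail recent log output without entering the container.
    docker logs --tail 50 dashDB

    # Inspect configuration details such as mounts and network settings.
    docker inspect dashDB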

For more information on dashDB Local, please visit the public Docker repository. An early access preview of dashDB Local is now available. Test it for yourself, help shape the solution, and request dashDB Local preview access here.

About James and Maria

James Cho is a Senior Technical Staff Member and Chief Architect for IBM dashDB Local. He has been a technical leader of integrated warehouse solutions and appliances at IBM for over 15 years. He currently focuses on data warehouse solutions delivered in public and private cloud data centers. His previous experience includes data warehouse DBA work, publication of industry-standard TPC performance benchmarks, and BI architecture and deployment responsibilities. James is a 1996 graduate of the University of Texas, where he earned a bachelor of science in computer science.

Follow James on LinkedIn

Maria Attarian has worked for IBM for the past four years on data warehousing technologies such as PureData System for Analytics and dashDB. As a software engineer, Maria is charged with finding new and innovative ways to deliver better products for clients and takes on a variety of challenges in this role. Maria is also active with the IEEE Young Professionals Toronto chapter; she served as the group’s chairperson for more than a year and remains an active member. Maria holds a master’s degree from the University of Waterloo and a bachelor of engineering degree from the National Technical University of Athens in Greece.