IBM dashDB Local FAQ

When it comes to next-generation data warehousing and data management, the IBM dashDB family offers a range of options that all share a common technology. dashDB Local is one of the new offerings, and this blog provides a series of answers to your questions. Please feel free to jump into the comments at the bottom to add your own questions, and we will respond.

      1.  What is the dashDB family?
        IBM dashDB is a family of next-generation database and data warehouse technologies that helps you respond to application needs very quickly. In the early days of data warehousing, IT professionals assembled hardware, software and storage themselves to handle large data sets for analytics. This was risky, costly and time consuming. That approach gave way to the data warehouse appliance, which provided an optimized system for data warehousing and analytics. The appliance was so successful that many consider it the backbone of their analytics architecture. But the world of analytics is expanding, and new technologies are needed to handle more requests, more data sources and even self-service needs. Hybrid data architectures are coming to the forefront to handle these increased needs. The dashDB family plays a key role here:

        • dashDB for analytics – as a fully managed cloud data warehouse
        • dashDB Local – as a configured data warehouse delivered via container technology to enable flexible deployment
        • dashDB for transactions – as a fully managed database as a service for transactional workloads.

        This family is designed to help you respond to new needs very quickly.  It also shares a common engine to help you leverage the same skills across different deployment models and application types. For more information on dashDB, visit

      2. What is dashDB Local?
        dashDB Local is in-memory, columnar data warehousing software that supports a wide range of analytic workloads, from data marts to enterprise data warehouses. It is deployed using Docker container technology and supports software-defined environments such as a private cloud, a virtual private cloud or the infrastructure of your choice, thus enabling hybrid cloud configurations. dashDB Local can be deployed in minutes, making it fast and easy to deliver an auto-configured data warehouse with built-in Netezza and Oracle compatibility.
      3. What is Docker?
        Docker is a container technology that simplifies packaging and distribution of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries; anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.
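        As a quick illustration (using the public hello-world image, which is not part of dashDB Local), the same two commands produce the same result on any host with a Docker engine:

        docker pull hello-world    # fetch the packaged image from the registry
        docker run hello-world     # run it; output is identical on any Docker host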
      4. What is the difference between Docker and VMware?
        VMware is virtualization technology, while Docker is container technology used for simplified packaging and distribution of software. A Docker container provides operating system-level process isolation, whereas VMware virtualization lets you run multiple virtual machines (VMs) on a single physical server, providing hardware-level abstraction. Unlike VMware, Docker does not create an entire virtual operating system, which makes containers lightweight to deploy and faster to start up than VMs. The two technologies can also be used together; for example, Docker containers can be created inside VMs to make a solution ultra-portable.
      5. Is dashDB Local generally available?
        Yes, it is. You can find a free trial of it at
      6. On which platforms is dashDB Local supported?
        Today, dashDB Local runs on any platform on which the Docker engine is supported, including Linux, Microsoft Windows, Apple Macintosh and cloud providers. More details can be found here as well as on the Docker site.
      7. How do I run dashDB Local on Windows or Macintosh?
        Because dashDB Local is packaged and deployed using Docker container technology, deploying it on Windows or Macintosh requires a Linux VM in which you run Docker. You can accomplish this easily by downloading the Docker Toolbox, which includes a GUI and a VM; a hedged example is shown below. Please refer to the dashDB Local knowledge center (documentation) for further details.
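        For example, with Docker Toolbox installed, the bundled docker-machine tool can create and target the Linux VM; the VM name (dashdb-vm) and memory size below are illustrative assumptions only:

        docker-machine create --driver virtualbox --virtualbox-memory 8192 dashdb-vm
        eval $(docker-machine env dashdb-vm)   # point the Docker client at the VM's engine
        docker info                            # verify the client can reach the engine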
      8.  Is dashDB Local available on IBM Bluemix Local?
        Bluemix Local is managed by IBM on a client's infrastructure. There are no plans to offer dashDB Local on Bluemix Local at this time.
      9.  How long does it take to install dashDB Local?
        dashDB Local is based on Docker container technology, which allows setup and installation in well under 30 minutes; today, either the SMP or the MPP version of dashDB Local can be installed in less than 15 minutes. This can be done on a range of servers, from a simple laptop to a production-grade server. Because components such as LDAP security and DSM monitoring are already bundled into a single container installation, the process is streamlined, efficient and a tremendous time savings. A hedged deployment sketch for the SMP case follows.
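        In the sketch below, the image name, tag and container mount point are illustrative assumptions, so use the exact values from the Knowledge Center; the host directory /mnt/clusterfs and the container name dashdb match the conventions used elsewhere in this FAQ:

        docker pull ibmdashdb/local:latest-linux                  # assumed image name/tag
        docker run -d -it --privileged=true --net=host --name=dashdb \
          -v /mnt/clusterfs:/mnt/bludata0 \
          ibmdashdb/local:latest-linux                            # mount point is an assumption
        docker logs --follow dashdb                               # watch initialization progress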
      10. What components are packaged and installed with dashDB Local?
        dashDB Local comprises the following software components, packaged in the Docker container to simplify deployment and speed up the overall setup: at the core, the dashDB analytics engine, tuned for columnar, in-memory workloads, with built-in Netezza and Oracle compatibility; an LDAP server for user access management; and IBM Data Server Manager, the key monitoring component, which provides query history monitoring, database performance monitoring and OS-level monitoring.
      11. Is dashDB local available outside of Docker public hub?
        Soon, a standalone version of dashDB Local will be available on IBM Fix Central, to be loaded with the standard Docker commands on the host OS. This will remove the dependency on pulling the dashDB Local image from the private, access-controlled repository on the public Docker Hub. It will also alleviate situations where a firewall port for access to the public Docker Hub cannot be opened.
      12. What are the minimum prerequisites to install dashDB Local?
        You can find the documented prerequisites for dashDB Local in the Knowledge Center. Some of the key requirements focus on the Docker client and on POSIX-compliant storage file systems, as documented. An additional key requirement is opening access to the following network ports on the firewall. Ensure that the ports below are opened and defined on all nodes in the cluster, and that every node is defined in each node's /etc/hosts file (a hedged firewall example follows the list):
        60000-60024, for database FCM
        25000-25999, for Apache Spark
        50022, for SSH/container OS
        50001, for database connection with SSL
        50000, for database connection without SSL
        9929, for communication tests
        9300, for web console status
        8443, for web console HTTPS
        5000, for System Manager
        389, for LDAP
        22, for SSH/host OS
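        As a hedged example, on a Linux host using firewalld the list above could be opened as follows; adapt this to whatever firewall tooling your distribution uses:

        firewall-cmd --permanent --add-port=60000-60024/tcp   # database FCM
        firewall-cmd --permanent --add-port=25000-25999/tcp   # Apache Spark
        firewall-cmd --permanent --add-port=50022/tcp --add-port=50001/tcp --add-port=50000/tcp
        firewall-cmd --permanent --add-port=9929/tcp --add-port=9300/tcp --add-port=8443/tcp
        firewall-cmd --permanent --add-port=5000/tcp --add-port=389/tcp --add-port=22/tcp
        firewall-cmd --reload                                 # apply the permanent rules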
      13. How does Apache Spark fit into the dashDB Local architecture?
        dashDB Local lets you dramatically modernize your data warehouse solutions with advanced analytics based on Spark. Spark is installed and configured within the dashDB Local container, making it fully integrated and able to support a variety of use cases. Spark applications that process relational data can gain significant performance and operational QoS benefits from deploying and running inside dashDB Local. The integration enables end-to-end analytic solution creation: interactive exploration and machine learning experiments; verification of analytic flows; easy operationalization of Spark applications, through to hosting them in a multi-tenant enterprise warehouse system; and integration of Spark applications with other applications via various invocation APIs. You can invoke Spark logic via SQL connections, land streaming data directly into tables via deployed Spark applications, and use integrated Spark to run complex data transformations and feature extractions that cannot be expressed with SQL.
      14. I have a running dashDB Local instance where I've set DISABLE_SPARK='YES' in the options file. How can I tell if this option took effect, and how much memory the database is actually using?
        This can be confirmed during startup of the dashDB Local container. When Spark is disabled, you will see "Spark support is going to be disabled." in the docker start output. When Spark is enabled, you will see "Current spark share : XX% of total memory." A hedged after-the-fact check follows.
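        Since these messages appear in the container's startup output, one way to check later (assuming the container is named dashdb, as in the restart example in the next question) is:

        docker logs dashdb | grep -i spark
        # disabled:  "Spark support is going to be disabled."
        # enabled:   "Current spark share : XX% of total memory."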
      15. Is the Spark setting available only as an install-time switch, or can I enable and disable Spark by changing this setting each time I start the database?
        The Spark feature can be enabled or disabled at any time. Change the DISABLE_SPARK setting in the options file and simply restart dashDB Local in the container (docker exec -it dashdb stop, then docker exec -it dashdb start). A minimal sketch follows.
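        A minimal sketch, assuming the options file sits at /mnt/clusterfs/options (as in question 23) and already contains a DISABLE_SPARK entry:

        sed -i "s/^DISABLE_SPARK=.*/DISABLE_SPARK='NO'/" /mnt/clusterfs/options   # re-enable Spark
        docker exec -it dashdb stop
        docker exec -it dashdb start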
      16. Is there currently a maximum number of nodes for dashDB Local?
        Currently, you can create up to a 24-node MPP cluster with dashDB Local.
      17. Can I upgrade from the GA trial to the production version of dashDB Local?
        Yes, you can upgrade the post-GA trial version to a production-grade version. Once the product is purchased, a permanent license can be applied.
      18. How can I get support for dashDB Local ?
        Upon purchase of dashDB Local, clients are entitled to support via email or phone, and can open a PMR in IBM's support ticketing system. IBM will support the dashDB Local container and all of the components inside it; for Docker-specific issues, clients should contact the Docker support team. The IBM support team will assist in identifying the problem and advise accordingly.
      19. If I run into docker issue, would IBM support handle it for me?
        IBM will support the dashDB Local container, but not the Docker engine itself; it is the client's responsibility to subscribe to Docker support. Customers can leverage the Docker CS engine (from Docker) or the open-source Docker RPMs that come with Linux distributions (such as Red Hat). Note that Docker provides commercial support only for the Docker CS engine, not for the Docker RPMs, so choosing the relevant support path for the Docker components is the customer's decision.
      20. How often are dashDB Local updates/fix packs made available?
        dashDB Local follows an agile, cloud-style development model, and the intent is to roll out container updates frequently. This makes it easier to stay current with the latest bug fixes and newest features. Updates are handled via the container update process and take less than 30 minutes, similar to the initial container setup. A hedged sketch of the flow follows.
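        In the sketch below, the image name and tag are assumptions; your data persists on the host volume (for example /mnt/clusterfs) rather than inside the container itself:

        docker stop dashdb
        docker rm dashdb                            # removes the container, not the host volume
        docker pull ibmdashdb/local:latest-linux    # assumed image name/tag
        # re-run the documented "docker run" deployment command to start the updated container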
      21. Can dashDB Local be installed on Amazon AWS or Microsoft Azure?
        dashDB Local can be installed on the infrastructure of your choice, as long as that infrastructure supports Docker container technology. dashDB Local is client-managed and can also be installed in your data center and any virtual private cloud infrastructures such as Amazon AWS EC2 or Microsoft Azure platforms.
      22. What kind of storage is required for dashDB Local?
        dashDB Local requires a POSIX-compliant clustered file system, but only for an MPP cluster; for a standalone SMP installation this is not a requirement, and you can use standard local disks for an SMP dashDB Local node setup. A clustered file system is one configured so that a group of servers and resources has concurrent access to a single file system. The key to a cluster file system is that the cluster appears as a single, highly available system to all end users; this increases the storage utilization rate and can result in high performance.
        Some common examples of clustered file systems are:
        • Veritas Cluster File System (VxFS) – Sun Solaris, HP-UX
        • General Parallel File System (GPFS) – IBM AIX, Linux
        • GFS2 – Red Hat only
      23. Is Oracle compatibility available in dashDB local?
        Yes, Oracle compatibility is supported in dashDB Local. You can enable applications that were written for an Oracle database to use dashDB Local without having to be rewritten. To use this capability, you must specify that dashDB Local is to run in Oracle compatibility mode prior to initial deployment. Before you begin, the /mnt/clusterfs/ directory must already be created, and you need root authority on the host system OS. By default, Oracle compatibility mode is not enabled; to enable it, you explicitly make an entry in the /mnt/clusterfs/options file prior to deploying dashDB Local. Run the command below, then follow the normal steps for container deployment and initialization:

        echo "ENABLE_ORACLE_COMPATIBILITY='YES'" >> /mnt/clusterfs/options
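        Putting those steps together, a minimal end-to-end sketch run as root on the host looks like this; the deployment step itself follows the documented procedure:

        mkdir -p /mnt/clusterfs                                            # directory must exist first
        echo "ENABLE_ORACLE_COMPATIBILITY='YES'" >> /mnt/clusterfs/options
        # then deploy and initialize the dashDB Local container as documented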

        For more details:


IBM Fluid Query 1.7 is Here!

by Doug Dailey

IBM Fluid Query offers a wide range of capabilities to help your business adapt to a hybrid data architecture and, more importantly, helps you bridge across "data silos" for deeper insights that leverage more data. Fluid Query is a standard entitlement included with the Netezza Platform Software suite for PureData System for Analytics (formerly Netezza). Fluid Query release 1.7 is now available, and you can learn more about its features below.

Why should you consider Fluid Query?

It offers many possible uses for solving problems in your business. Here are a few ideas:
• Discover and explore “Day Zero” data landing in your Hadoop environment
• Query data from multiple cross-enterprise repositories to understand relationships
• Access structured data from common sources like Oracle, SQL Server, MySQL, and PostgreSQL
• Query historical data on Hadoop via Hive, BigInsights Big SQL or Impala
• Derive relationships between data residing on Hadoop, the cloud and on-premises
• Offload colder data from PureData System for Analytics to Hadoop to free capacity
• Drive business continuity through a low-fidelity disaster recovery solution on Hadoop
• Backup your database or a subset of data to Hadoop in an immutable format
• Incrementally feed analytics side-cars residing on Hadoop with dimensional data

By far, the most prominent use for Fluid Query for a data warehouse administrator is that of warehouse augmentation, capacity relief and replicating analytics side-cars for analysts and scientists.

New: Hadoop connector support for Hadoop file formats to increase flexibility

IBM Fluid Query 1.7 ushers in greater flexibility for Hadoop users with support for popular file formats typically used with HDFS. These include popular data storage formats like Avro, Parquet, ORC and RC that are often used to manage big data in a Hadoop environment.

Choosing the best format and compression mode can result in drastic differences in performance and storage on disk. A file format that doesn't support flexible schema evolution can impose a processing penalty when you make simple changes to a table. Let's just say that if you live in the Hadoop domain, you know exactly what I am speaking of. For instance, if you want to use Avro, do your tools have compatible readers and writers? If you are using Impala, do you know that it doesn't support ORC, or that Hortonworks and Hive-Stinger don't play well with Parquet? Double-check your needs and tool sets before diving into these popular format types.

By providing support for these popular formats, Fluid Query allows you to import, store, and access this data through local tools and utilities on HDFS. But here is where it gets interesting in Fluid Query 1.7: you can also query data in these formats through the Hadoop connector provided with IBM Fluid Query, without any change to your SQL!

New: Robust connector templates

In addition, Fluid Query 1.7 now makes available a more robust set of connector templates designed to help you jump-start your use of Fluid Query. You may recall that the prior release provided a generic connector that allows you to configure and connect to any structured data store via JDBC. With the 1.7 release we are offering pre-defined templates so you can get up and running more quickly; there are templates for Oracle, Teradata, SQL Server, MySQL, PostgreSQL, Informix, and MapR for Hive. In cases where user data type mappings differ, we also provide mapping files to simplify access. If you have your own favorite database, you can use the generic connector, along with any of the provided templates, as a basis for building a new connector for your specific needs.

Again, the primary focus for Fluid Query is to deliver open data access across your ecosystem. Whether the data resides on disk, in-memory, in the Cloud or on Hadoop, we strive to enable your business to be open for data. We recognize that you are up against significant challenges in meeting demands of the business and marketplace, with one of the top priorities around access and federation.

New: Data movement advances

Moving data is not the best choice. Businesses spend quite a bit of effort ingesting data, staging it, then scrubbing, prepping and scoring it for consumption by business users. This is a costly process. As we move closer and closer to virtualization, the goal is to move the smallest amount of data possible while you access and query only the data you need. So not only is access paramount, but your knowledge of the data in your environment is crucial to using it efficiently.

Fluid Query does offer data movement capability through what we call Fast Data Movement. Focusing on the pipe between PureData System for Analytics (PDA) and Hadoop, we offer a high-speed transfer tool that lets you move data between these two environments very efficiently and securely. You have control over the security, compression, format and WHERE clause (database, table, filtered data). A key benefit is our ability to transfer data in our proprietary binary format, which enables orders-of-magnitude better performance than Sqoop when you do have to move data.

Fluid Query 1.7 also offers some additional benefits:
• Kerberos support for our generic database connector
• Support for BigInsights Big SQL during import (automatically synchronizes Hive and Big SQL on import)
• Varchar and String mapping improvements
• The nz.fq.table import parameter now supports a combination of multiple schemas and tables
• Improved date handling
• Improved validation for NPS and Hadoop environment (connectors and import/export)
• Support for BigInsights 4.1 and Cloudera 5.5.1
• A new Best Practices User Guide, plus two new Tutorials

You can download Fluid Query 1.7 from IBM Fix Central, or from the Netezza Developer Network as non-warranted software for use with the Netezza Emulator.


Take a test drive today!

About Doug,
Doug has over 20 years of combined technical and management experience in the software industry, with emphasis on customer service and, more recently, product management. He is currently part of a highly motivated product management team that is both inspired by and passionate about the IBM PureData System for Analytics product portfolio.

Using Docker containers for software-defined environments or private cloud implementations

by Mitesh Shah

Data warehousing architectures have evolved considerably over recent years. As businesses try to derive insight as the basis of value creation, all roles must participate by leveraging new insights. As a result, analytics needs are expanding, markets are transforming and new business models are being created. This ushers in increased requirements for self-service analytics and alternative infrastructure solutions. Read on to learn how a "software-defined environment" (SDE) that utilizes container technology can help you meet expanded analytics needs.

Adaptability delivered through software-defined environments

From an avalanche of new data, to mobile computing and cloud-based platforms, new technologies must move into the IT infrastructure very quickly. Traditional IT systems—hampered by labor-intensive management and high costs—are struggling to keep up. IT organizations are caught between complex security requirements, extreme data volumes and the need for rapid deployment of new services. A simpler, more adaptive and more responsive IT infrastructure is required.

One of the key solutions on the horizon is the SDE, which optimizes the entire computing infrastructure – compute, storage and network resources – so that IT staff can adapt to different types of workloads very quickly. For example, without an SDE, resources are assigned to workloads manually; within an SDE, the same assignments happen automatically.

By dynamically assigning workloads to IT resources based on a variety of factors, including the characteristics of specific applications, the best-available resources, and service-level policies, a software-defined environment can deliver continuous, dynamic optimization and reconfiguration to address infrastructure issues.

Software-defined environment benefits

A software-defined environment framework can help to:

  • Simplify operations with automated infrastructure tuning and configuration
  • Reduce time to value with simple, pluggable, API-rich architectures
  • Sense and respond to workload demands automatically
  • Optimize resources by assigning assets without manual intervention
  • Maintain security and manage privacy through a common platform
  • Facilitate better business outcomes through advanced analytics and cognitive capabilities

A software-defined environment fits well into the private cloud ecosystem so that IT staff can deliver flexibility and ease of consumption, as well as maximize the use of commodity or virtualized hardware. An SDE is now easily achievable by leveraging container technology, where Docker is one of the leaders.

Docker containers provide application portability

Docker containers “wrap up” a piece of software in a complete file system that contains everything the software needs to run: code, run-times, system tools, system libraries and other components that can be installed on a server. This guarantees that the software will always run the same, regardless of the environment in which it is running.

Docker provides true application portability and ease of consumption by alleviating the complex process of software setup and installation that often can require multiple skills across multiple hours or days. It provides OS-level abstraction without disrupting the standards on the host operating system, which makes it even more attractive.

One key point to keep in mind is that Docker is not the same as VMware. Docker provides process isolation at the operating system level, whereas VMware provides a hardware abstraction layer. Unlike VMware, Docker does not create an entire virtual operating system; instead, the host operating system kernel is shared across multiple Docker containers. This makes a container very lightweight to deploy and faster to start than a virtual machine. There is no looking back, as container technology is quickly being embraced as part of hybrid solutions that meet business user needs, fast!

dashDB Local: data warehousing delivered via Docker container

Coming full circle, the data warehouse is the foundation of all analytics and must be fast and agile to serve new analytics needs. Software-defined environments make this easy, enabling deployment of the warehousing engine in minutes rather than hours or days.

IBM dashDB is the data warehousing technology that delivers high-speed insights through in-memory computing and in-database analytics at massively parallel processing (MPP) scale. It has been available as a fully managed service on the IBM cloud. Now, dashDB Local is available as an early access client preview for private clouds and other software-defined infrastructures. I hope you will test this new technology and provide us valuable feedback. Learn more, then request access:

About Mitesh,

Mitesh Shah is the product manager for the new dashDB data warehousing solution as a software-defined environment (SDE) that can be used on private clouds and other implementations that support Docker container technology. He has broad experience across various facets of software development revolving around database and data warehousing technologies. Throughout his career, Mitesh has enjoyed helping clients address their data management and solution architecture needs.

Fluid doesn’t just describe your coffee anymore … Introducing IBM Fluid Query 1.0

by Wendy Lucas

Having grown up in the world of data and analytics, I long for the days when our goal was to create a single version of the truth. Remember when data architecture diagrams showed source systems flowing through ETL, into a centralized data warehouse and then out to business intelligence applications? Wow, that was nice and simple, right – at least conceptually? As a consultant, I can still remember advising clients and helping them pictorially represent this reference architecture. It was a pretty simple picture, but that was also a long time ago.

While IT organizations struggled with data integration, enterprise data models and producing the single source of the truth, the lines of business grew impatient and would build their own data marts (or data silos).  We can think of this as the first signs of the requirement for user self-service. The goal behind building the consolidated, enterprise, single version of the truth never went away. Sure, we still want the ability to drive more accurate decision-making, deliver consistent reporting, meet regulatory requirements, etc. However, the ability to achieve this goal became very difficult as requirements for user self-service, increased agility, new data types, lower cost solutions, better business insight and faster time to value became more important.

Recognizing the Logical Data Warehouse

Enterprises have developed collections of data assets that each provide value for specific workloads and purposes. This includes data warehouses, data marts, operational data stores and Hadoop data stores to name a few. It is really this collection of data assets that now serves as the foundation for driving analytics, fulfilling the purpose of the data warehouse within the architecture. The Logical Data Warehouse or LDW is a term we use to describe the collection of data assets that make up the data warehouse environment, recognizing that the data warehouse is no longer just a single entity. Each data store within the Logical Data Warehouse can be built on a different platform, fit for the purpose of the workload and analytic requirements it serves.

But doesn’t this go against the single version of the truth? The LDW will still struggle to deliver on the goal behind the single version of the truth, if it doesn’t have information governance, common metadata and data integration practices in place. This is a key concept. If you’re interested in more on this topic, check out a recent webcast by some of my colleagues on the “Five Pitfalls to Avoid in Your Data Warehouse Modernization Project: Making Data Work for You.”

Unifying data across the Logical Data Warehouse

Logically grouping separate data stores into the LDW does not necessarily make our lives easier. Even assuming you have followed good information governance practices, you still have data stores in different places, perhaps on different platforms. Haven't you just made life infinitely more difficult for the application developers and users who want self-service? Users need the ability to leverage data across these various data stores without having to worry about the complexity of where to find it, or about rewriting their applications. And let's not forget the needs of IT: DBAs struggle to manage capacity and performance on data warehouses while listening to Hadoop administrators brag about the seemingly endless, lower-cost storage and the new data types they can manage. What if we could have the best of all worlds: seamless access to data across a variety of stores, formats and platforms, and the capability for IT to manage Hadoop and data warehouses alongside each other in a way that leverages the strengths of both?

Introducing IBM Fluid Query

IBM Fluid Query is the capability to unify data across the Logical Data Warehouse, providing the ability to seamlessly access data in its various forms and locations. No matter where a user connects within the logical data warehouse, they have access to all data through the same standard API/SQL/analytics access. IBM Fluid Query powers the Logical Data Warehouse, giving users the ability to combine numerous types of data from various sources in a fast and agile manner to drive analytics and deeper insight, without worrying about connecting to multiple data stores, using different syntaxes or APIs, or changing their applications.

In its first release, IBM Fluid Query 1.0 will provide users of the IBM PureData System for Analytics the capability to access Hadoop data from their data warehouse and move data between Hadoop and PureData if needed. High performance is about moving the query to the data, not the data to the query. This provides extreme value to PureData users who want the ability to merge data from their structured data warehouse with Hadoop for powerful analytic combinations, or more in-depth analysis. IBM Fluid Query 1.0 is part of a toolkit within Netezza Platform Software (NPS) on the appliance so it’s free for all PureData System for Analytics customers.

For Hadoop users, IBM also provides IBM Big SQL which delivers Fluid Query capability. Big SQL provides the ability to run queries on a variety of data stores, including PureData System for Analytics, DB2 and many others from your IBM BigInsights Hadoop environment. Big SQL has the ability to push the query to the data store and return the result to Hadoop without moving all the data across the network. Other Hadoop vendors provide the ability to write queries like this but they move all the data back to Hadoop before filtering, applying predicates, joining, etc. In the world of big data, can you really afford to move lots of data around to meet the queries that need it?

IBM Fluid Query 1.0 became generally available on March 27 as a software addition for PureData System for Analytics customers. If you are an existing customer and want to understand how to take advantage of IBM Fluid Query 1.0, or if you just would like more information, I encourage you to listen to this on-demand webcast: Virtual Enzee – The Logical Data Warehouse, Hadoop and PureData System for Analytics, and check out the solution brief. Or if you are an existing PureData System for Analytics customer, download this software. Update: Learn about Fluid Query 1.5, announced in July 2015.

About Wendy,

Wendy Lucas is a Program Director for IBM Data Warehouse Marketing. Wendy has over 20 years of experience in data warehousing and business intelligence solutions, including 12 years at IBM. She has helped clients in a variety of roles, including application development, management consulting, project management, technical sales management and marketing. Wendy holds a Bachelor of Science in Computer Science from Capital University and you can follow her on Twitter at @wlucas001.

Twitter Chat Announcement: Demystifying the Data Refinery on Oct 22 from 1:00 pm – 2:00 pm ET

Organizations are storing large volumes of data in the hope of leveraging it for advanced analytics. Architectures that include data lakes, data reservoirs and data hubs are designed to help organizations manage growing volume, variety and velocity of data, and to make the data available for analysis by everyone in the organization. While a data lake can provide value, it alone isn’t sufficient for managing and analyzing disparate sources of data.

What are the additional capabilities needed to enable these centralized pools of data to deliver value? Is there a way to provide self-service access to everyone within the organization in a sustainable manner? What are the gaps that need to be addressed? Join us in the #makedatawork Twitter chat to get answers to some of these questions and more.

Our special guests for the chat are R Ray Wang (@rwang0), Principal Analyst, Founder, and Chairman of Constellation Research, Inc.; David Corrigan (@dcorrigan), Director of Product Marketing for IBM InfoSphere; Paula Wiles Sigmon (@paulawilesigmon), Program Director of Product Marketing for IBM InfoSphere; and James Kobielus (@jameskobielus), IBM big data evangelist, speaker and writer. The Twitter handle @IBM_InfoSphere will be moderating the chat.

You can follow along and join the discussion using the hashtag #makedatawork. Here are the questions we’ll be discussing, as well as reference articles to help inspire the conversation on Wednesday, October 22, 1:00 p.m. ET.

#makedatawork chat questions

  1. Can the new paradigms for storing data – data lakes, data reservoirs, etc. – replace traditional platforms for managing data from disparate sources?
  2. How is a data refinery different from a data reservoir or a data lake? Is it a marketing gimmick?
  3. Where does a data refinery fit into an existing enterprise information architecture?
  4. What are the critical capabilities for a data refinery?
  5. What are the additional capabilities and services needed to make data in a data lake clean, relevant, and accessible to all?
  6. As we evolve toward self-service data refinement for the business user, what will be the role of IT?
  7. What advice do you have for people and organizations trying to streamline access to clean, relevant data for business users?

Looking forward to your participation!


Team IBM Data Warehousing