Why Are Customers Architecting Hybrid Data Warehouses?

By Mona Patel

As a leader in IT, you may be incentivized or mandated to explore cloud and big data solutions that transform rigid data warehousing environments into agile ones that match how the business really wants to operate. Questions like these naturally come to mind:

  • How do I integrate new analytic capabilities and data sets with my current on-premises data warehouse environment?
  • How do I deliver self-service solutions to accelerate the analytic process?
  • How do I leverage commodity hardware to lower costs?

To answer these questions, and more, organizations are architecting hybrid data warehouses.  In fact, the organizations moving toward hybrid are what The Aberdeen Group’s latest research calls ‘Best In Class’: “Best In Class focus on hybridity, both in their data infrastructure and with their analytical tools as well.  Given the substantial investments companies have made in their IT environment, a hybrid approach allows them to utilize these investments to the best of their ability while exploring more flexible and scalable cloud-based solutions as well.”  To hear more about these ‘Best In Class’ organizations, watch the 45-minute webcast.

How do you get to this hybrid data warehouse architecture with the least risk and the most reward?  IBM dashDB delivers flexible cloud database services that extend and integrate with your current analytics and data warehouse environment, addressing the challenges of leveraging new sources of customer, product, and operational insight to build new applications, products, and business models.
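To make the integration point concrete, here is a minimal sketch of querying a dashDB database from an existing Python environment using the ibm_db driver; the hostname, credentials and table are placeholders rather than values from any real service.

```python
# Minimal sketch: querying a dashDB (cloud) warehouse from an existing Python
# environment with the ibm_db driver. The hostname, credentials and table are
# placeholders, not values from a real service.
import ibm_db

dsn = (
    "DATABASE=BLUDB;"
    "HOSTNAME=your-dashdb-host.example.com;"
    "PORT=50000;"
    "PROTOCOL=TCPIP;"
    "UID=your_user;"
    "PWD=your_password;"
)

conn = ibm_db.connect(dsn, "", "")

# Run an aggregate in the cloud warehouse and pull back only the summary rows.
stmt = ibm_db.exec_immediate(
    conn, "SELECT region, SUM(revenue) AS revenue FROM sales GROUP BY region"
)
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row["REGION"], row["REVENUE"])
    row = ibm_db.fetch_assoc(stmt)

ibm_db.close(conn)
```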

To help our clients evaluate hybrid data warehouse solutions, Harvard Research Group (HRG) provides an assessment of IBM dashDB.  In this paper, HRG highlights product functionality, as well as three use cases in Healthcare, Oil and Gas, and Financial Services.  Security, Performance, High Availability, In-Database Analytics, and more are covered in the paper to ensure future architecture enhancements optimize IT rather than add new skills, complexities, and integration costs. After reading this paper, you will find that dashDB enables IT to respond rapidly to the needs of the business, keep systems running smoothly, and achieve faster ROI.

To learn more about dashDB, check out the video below:

 

About Mona,

Mona Patel is currently the Portfolio Marketing Manager for IBM dashDB, the future of data warehousing.  With over 20 years of analyzing data at The Department of Water and Power, Air Touch Communications, Oracle, and MicroStrategy, Mona decided to grow her career at IBM, a leader in data warehousing and analytics.  Mona received her Bachelor of Science degree in Electrical Engineering from UCLA.


Using Docker containers for software-defined environments or private cloud implementations

by Mitesh Shah

Data warehousing architectures have evolved considerably in recent years. As businesses treat insight as the basis of value creation, every role must participate by leveraging new insights.  As a result, analytics needs are expanding, markets are transforming and new business models are being created, which brings increased requirements for self-service analytics and alternative infrastructure solutions. Read on to learn how a “software-defined environment” (SDE) that utilizes container technology can help you meet these expanded analytics needs.

Adaptability delivered through software-defined environments

From an avalanche of new data, to mobile computing and cloud-based platforms, new technologies must move into the IT infrastructure very quickly. Traditional IT systems—hampered by labor-intensive management and high costs—are struggling to keep up. IT organizations are caught between complex security requirements, extreme data volumes and the need for rapid deployment of new services. A simpler, more adaptive and more responsive IT infrastructure is required.

One of the key solutions on the horizon is the SDE, which optimizes the entire computing infrastructure – compute, storage and network resources – so that IT staff can adapt to different types of workloads very quickly. For example, without an SDE, resources are assigned to workloads manually; within an SDE, the same assignments happen automatically.


By dynamically assigning workloads to IT resources based on a variety of factors, including the characteristics of specific applications, the best-available resources, and service-level policies, a software-defined environment can deliver continuous, dynamic optimization and reconfiguration to address infrastructure issues.
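As a toy illustration only, and not IBM’s SDE implementation, the following Python sketch shows what policy-based placement of workloads onto resources can look like; the resource pool, workload attributes and policy are invented for the example.

```python
# Toy illustration of policy-based workload placement, in the spirit of an SDE.
# The resource pool, workload attributes and policy are invented for this example;
# a real software-defined environment automates this across compute, storage and
# network rather than in a few lines of application code.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    cpu_free: int      # available vCPUs
    mem_free_gb: int   # available memory in GB
    tier: str          # service level offered: "gold" or "silver"

@dataclass
class Workload:
    name: str
    cpu: int
    mem_gb: int
    sla: str           # service level required

def place(workload, pool):
    """Pick the least-loaded resource that satisfies the workload's SLA and capacity."""
    candidates = [
        r for r in pool
        if r.tier == workload.sla
        and r.cpu_free >= workload.cpu
        and r.mem_free_gb >= workload.mem_gb
    ]
    if not candidates:
        return None
    best = max(candidates, key=lambda r: (r.cpu_free, r.mem_free_gb))
    best.cpu_free -= workload.cpu
    best.mem_free_gb -= workload.mem_gb
    return best

pool = [Resource("node-a", 16, 128, "gold"), Resource("node-b", 8, 64, "silver")]
for wl in (Workload("nightly-etl", 8, 64, "gold"), Workload("ad-hoc-reports", 4, 16, "silver")):
    target = place(wl, pool)
    print(wl.name, "->", target.name if target else "queued: no capacity")
```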

Software-defined environment benefits

A software-defined environment framework can help to:

  • Simplify operations with automated infrastructure tuning and configuration
  • Reduce time to value with a simple, pluggable, API-rich architecture
  • Sense and respond to workload demands automatically
  • Optimize resources by assigning assets without manual intervention
  • Maintain security and manage privacy through a common platform
  • Facilitate better business outcomes through advanced analytics and cognitive capabilities

A software-defined environment fits well into the private cloud ecosystem so that IT staff can deliver flexibility and ease of consumption, as well as maximize the use of commodity or virtualized hardware. An SDE is now easily achievable by leveraging container technology, where Docker is one of the leaders.

Docker containers provide application portability

Docker containers “wrap up” a piece of software in a complete file system that contains everything the software needs to run: code, run-times, system tools, system libraries and other components that can be installed on a server. This guarantees that the software will always run the same, regardless of the environment in which it is running.

Docker provides true application portability and ease of consumption by alleviating the complex process of software setup and installation that often can require multiple skills across multiple hours or days. It provides OS-level abstraction without disrupting the standards on the host operating system, which makes it even more attractive.

One key point to keep in mind is that Docker is not the same as VMware. Docker provides process isolation at the operating system level, whereas VMware provides a hardware abstraction layer. Unlike VMware, Docker does not create an entire virtual operating system; instead, the host operating system kernel is shared across multiple Docker containers. This makes containers very lightweight to deploy and faster to start than a virtual machine. There is no looking back: container technology is being embraced very quickly as part of hybrid solutions that meet business user needs, fast!
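A quick way to see the shared-kernel point for yourself is to compare the kernel reported inside a container with the host’s. The sketch below assumes a local Docker daemon and the Docker SDK for Python (the docker package); the image tag is just an example.

```python
# Minimal sketch: a container shares the host kernel instead of booting its own OS.
# Assumes a local Docker daemon and the Docker SDK for Python (pip install docker);
# the image tag is just an example.
import platform
import docker

client = docker.from_env()

# Run `uname -r` inside a small Linux container and capture its output.
container_kernel = client.containers.run(
    "alpine:3.18", ["uname", "-r"], remove=True
).decode().strip()

print("host kernel:     ", platform.release())
print("container kernel:", container_kernel)
# On a Linux host the two values match, because the container is an isolated
# process tree on the same kernel, not a separate virtual machine.
```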

dashDB Local: data warehousing delivered via Docker container

Coming full circle, the data warehouse is the foundation of all analytics and must be fast and agile to serve new analytics needs.  Software-defined environments make this easy, enabling deployment of the warehousing engine in minutes rather than hours or days.

IBM dashDB is the data warehousing technology that delivers high-speed insights through in-memory computing and in-database analytics at massively parallel processing (MPP) scale.  It has been available as a fully managed service on the IBM cloud.  Now, dashDB Local is available as an early access client preview for private clouds and other software-defined infrastructures.  I hope you will test this new technology and provide us valuable feedback. Learn more, then request access: ibm.biz/dashDBLocal
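For a sense of what deployment in minutes can look like, here is a hypothetical sketch using the Docker SDK for Python; the image name, port mapping, environment variable and volume path are placeholders for illustration, and the preview instructions at ibm.biz/dashDBLocal are the authoritative source for the real values.

```python
# Hypothetical sketch of starting a dashDB Local container with the Docker SDK
# for Python. The image name, port mapping, environment variable and volume path
# are placeholders for illustration only; follow the preview instructions at
# ibm.biz/dashDBLocal for the real values.
import docker

client = docker.from_env()

container = client.containers.run(
    "example/dashdb-local:preview",                  # placeholder image tag
    name="dashdb-local",
    detach=True,
    ports={"8443/tcp": 8443},                        # placeholder console port mapping
    environment={"DB_PASSWORD": "change-me"},        # placeholder credential variable
    volumes={"/data/dashdb": {"bind": "/mnt/data", "mode": "rw"}},  # placeholder data volume
)

print(container.name, container.status)
# Once the container reports healthy, the warehouse engine is reachable like any
# other database endpoint: minutes of setup rather than hours or days.
```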

About Mitesh,

Mitesh Shah is the product manager for the new dashDB data warehousing solution as a software-defined environment (SDE) that can be used on private clouds and other implementations that support Docker container technology. He has broad experience across many facets of software development, centered on database and data warehousing technologies.  Throughout his career, Mitesh has enjoyed a focus on helping clients address their data management and solution architecture needs.

What’s new: IBM Fluid Query 1.6

by Doug Dailey

Editorial Note: IBM Fluid Query 1.7 became available in May, 2016. You can read about features in release 1.6 here, but we also recommend reading the release 1.7 blog here.

The IBM PureData System for Analytics team has assembled a value-added set of enhancements to the current software versions of Netezza Platform Software (NPS), INZA software and Fluid Query. We have enhanced integration, security, real-time analytics for System z and usability features with our latest software suite, arriving on Fix Central today.

There will be something here for everyone, whether you are looking to integrate your PureData System (Netezza) into a Logical Data Warehouse, improve security, gain more leverage with DB2 Analytics Accelerator for z/OS, or simply improve your day-to-day experience. This post covers the IBM Fluid Query 1.6 technology.  Refer to my NPS and INZA post (link) for more information on the enhancements that are now available in these other areas.

Integrating with the Logical Data Warehouse: Fluid Query overview

Are you struggling with building out your data reservoir, lake or lagoon? Feeling stuck in a swamp? Or, are you surfing effortlessly through an organized Logical Data Warehouse (LDW)?

Fluid Query offers a nice baseline of capability to get your PureData footprint plugged into your broader data environment or tethered directly to your IBM BigInsights Apache Hadoop distribution. Opening access across your broader ecosystem of on-premises, cloud, commodity hardware and Hadoop platforms gets you ever closer to capturing value throughout “systems of engagement” and “systems of record” so you can reveal new insights across the enterprise.

Now is the time to be fluid in your business, whether it is ease of data integration, access to key data for discovery/exploration, monetizing data, or sizing fit-for-purpose stores for different data types.  IBM Fluid Query opens these conversations and offers some valuable flexibility to connect the PureData System with other PureData Systems, Hadoop, DB2, Oracle and virtually any structured data source that supports JDBC drivers.
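To illustrate the kind of JDBC plumbing such a connector builds on, here is a minimal Python sketch using the jaydebeapi package; the driver class, URL, credentials and jar path are placeholders, and this shows plain JDBC connectivity rather than the Fluid Query configuration itself.

```python
# Illustration of the generic JDBC plumbing such a connector builds on, using the
# jaydebeapi package. The driver class, URL, credentials and jar path are
# placeholders; this is not the Fluid Query configuration itself.
import jaydebeapi

conn = jaydebeapi.connect(
    "org.postgresql.Driver",                          # any JDBC driver class
    "jdbc:postgresql://pg-host.example.com:5432/sales",
    ["analytics_user", "secret"],
    "/opt/jdbc/postgresql.jar",                       # path to the driver jar
)

cur = conn.cursor()
cur.execute("SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id")
for customer_id, total in cur.fetchall():
    print(customer_id, total)

cur.close()
conn.close()
```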

The value of content and the ability to tap into new insights is a must have to compete in any market. Fluid Query allows you to provision data for better use by application developers, data scientists and business users. We provide the tools to build the capability to enable any user group.


What’s new in Fluid Query 1.6?

Fluid Query was first released this year and is now in its third “agile” release of the year. As part of NPS software, it is available at no charge to existing PureData clients; you will find information on how to access Fluid Query 1.6 below.

This capability enables you to query more data for deeper analytics from PureData. For example, you can query data in the PureData System together with:

  • Data in IBM BigInsights or other Hadoop implementations
  • Relational data stores (DB2, 3rd party and open source databases like Postgres, MySQL, etc.)
  • Multiple generations of PureData System for Analytics appliances (“Twin Fin”, “Striper”, “Mako”)

The following is a summary of some new features in the release that all help to support your needs for insights across a range of data types and stores:

  • Generic connector for access to structured data stores that support JDBC
    This generic connector enables you to select the database of your choice. Database servers and engines such as Teradata, SQL Server, Informix, MemSQL and MapR can now be tapped for insight. We have also provided a capability to handle data type mismatches between differing source and target systems.
  • Support for compressed read from Big SQL on IBM BigInsights
    Using the Big SQL capability in IBM BigInsights, you can now read compressed data in Hadoop distributions such as BigInsights, Cloudera and Hortonworks. This adds flexibility and efficiency in storage, data protection and access.
  • Ability to import databases to Hadoop and append to tables in Hadoop
    New capabilities now enable you to import databases to Hadoop, as well as append data in existing tables in Hadoop. One use case for this is backing up historical data to a queryable archive to help manage capacity on the data warehouse. This may include incremental backups, for example from a specific date for speed and efficiency.
  • Support for the latest Hadoop distributions
    Fluid Query 1.6 now supports the latest Hadoop distributions, including BigInsights 4.1, Hortonworks 2.5 and Cloudera 5.4.5. For Netezza software, support is now available for NPS 7.2.1 and INZA 3.2.1.

Fluid Query 1.6 can be easily downloaded from IBM Support Fix Central. I encourage you to refer to my “Getting Started” post that was written for Fluid Query 1.5 for additional tips and instructions. Note that this link is for existing PureData clients. Refer to the section below if you are not a current client.


Packaging and distribution

From a packaging perspective, we refreshed IBM Netezza Platform Developer Software to the latest NPS 7.2.1 release to ensure the software suite available from IBM Passport Advantage is current.

Supported Appliances:
  • N3001
  • N2002
  • N2001
  • N100x
  • C1000

Supported Software:
  • Netezza Platform Software v7.2.1
  • Netezza Client Kits v7.2.1
  • Netezza SQL Extension Toolkit v7.2.1
  • Netezza Analytics v3.2.1
  • IBM Fluid Query v1.6
  • Netezza Performance Portal v2.1.1
  • IBM Netezza Platform Development Software v7.2.1

On the Netezza Developer Network, we continue to make it easy to pick up and work with non-warranted products for basic evaluation by refreshing the Netezza Emulator to NPS 7.2.1 with INZA 3.2.1. You will also find a refresh of our non-warranted version of Fluid Query 1.6 and the complete set of Client Kits that support NPS 7.2.1.


Feel free to download and play with these as a prelude to a PureData System for Analytics purchase, or as a quick way to validate new software functionality with your application. We remain committed to helping partners who work with our systems by keeping the latest systems and software available for you to access. Bring your application or solution and work to certify, qualify and validate it.

For more information on the NPS 7.2.1 and INZA 3.2.1 software, refer to my post.

About Doug,
Doug has over 20 years of combined technical and management experience in the software industry, with an emphasis on customer service and, more recently, product management. He is currently part of a highly motivated product management team that is both inspired by and passionate about the IBM PureData System for Analytics product portfolio.

IBM Fluid Query: Extending Insights Across More Data Stores

by Rich Hughes

Since its announcement in March 2015, IBM Fluid Query has opened the door to better business insights for IBM PureData System for Analytics clients. Our clients have wanted and needed accessibility across a wide variety of data stores, including Apache Hadoop with its unstructured stores, which is one of the key reasons for the massive growth in data volumes. There is also valuable data in other types of stores, including relational databases that are often “systems of record” and “systems of insight”. Plus, Apache Spark is entering the picture as an up-and-coming engine for real-time analytics and machine learning.

IBM is pleased to announce IBM Fluid Query 1.5 to provide seamless integration with these additional data stores—making it even easier to get deeper insights from even more data.

IBM Fluid Query 1.5 – What is it?

IBM Fluid Query 1.5 provides access to data in other data stores from IBM PureData System for Analytics appliances. Starting with Fluid Query 1.0, users were able to query and quickly move data between Hadoop and IBM PureData System for Analytics appliances. This capability covered IBM BigInsights for Apache Hadoop, Cloudera, and Hortonworks.

Now with Fluid Query 1.5, we add the ability to reach into even more data stores including Spark and such popular relational database management systems as:

  • DB2 for Linux, UNIX and Windows
  • dashDB
  • PureData System for Operational Analytics
  • Oracle Database
  • Other PureData System for Analytics implementations

Fluid Query is able to direct queries from PureData System for Analytics database tables to all of these different data sources and get just the results back—thus creating a powerful analytic capability.

IBM Fluid Query Benefits

IBM Fluid Query offers two key benefits. First, it makes practical use of data stores and lets users access them with their existing SQL skills. Workbench tools yield productivity gains as SQL remains the query language of choice when PureData System for Analytics and Hadoop schemas logically merge. IBM Fluid Query provides the physical bridge over which a query is pushed efficiently to where the data needed for that query resides—whether in the same data warehouse, another data warehouse, a relational or transactional database, Hadoop or Spark.

Second, IBM Fluid Query enables archiving and capacity management on PureData-based data warehouses. With Fluid Query, users gain:

  • better exploitation of Hadoop as a “Day 0” archive that is queryable with conventional SQL;
  • capabilities to make use of data in a Spark in-memory analytics engine;
  • the ability to easily combine hot data from PureData with colder data from Hadoop; and
  • data warehouse resource management, with the ability to archive colder data from PureData to Hadoop to relieve resources on the data warehouse.

Managing your share of Big Data Growth

The design point for Fluid Query is that the query is moved to the data instead of bringing massive data volumes to the query. This is a best-of-breed approach where tasks are performed on the platform best suited for that workload.
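The contrast can be sketched in a few lines of Python; the connection object and table names are placeholders, and the point is the shape of the two approaches rather than any particular driver.

```python
# Toy contrast of the two approaches. `remote_conn` is any DB-API 2.0 connection
# to the store that holds the data; the table and column names are placeholders.

def pull_then_filter(remote_conn):
    """Anti-pattern: ship every row across the network, then reduce it locally."""
    cur = remote_conn.cursor()
    cur.execute("SELECT region, amount FROM sales_2015")  # full-table transfer
    totals = {}
    for region, amount in cur.fetchall():
        totals[region] = totals.get(region, 0) + amount
    return totals

def push_query_to_data(remote_conn):
    """Move the query to the data: the platform holding the rows does the
    aggregation and only a small result set crosses the network."""
    cur = remote_conn.cursor()
    cur.execute("SELECT region, SUM(amount) FROM sales_2015 GROUP BY region")
    return dict(cur.fetchall())
```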

For example, use the PureData System for Analytics data warehouse for production quality analytics where performance is critical to the success of your business, while simultaneously using Hadoop or Spark to discover the inherent value of those full-volume data sources. Or, create powerful analytic combinations across data in other operational systems or analytics warehouses with PureData stores without having to move and integrate data before analyzing it.

IBM Fluid Query 1.5 is now generally available as a software addition to PureData System for Analytics clients. If you want to understand how to take advantage of IBM® Fluid Query 1.5, check out these resources:

About Rich,

Rich Hughes is an IBM Marketing Program Manager for Data Warehousing.  Hughes has worked in a variety of Information Technology, Data Warehousing, and Big Data jobs, and has been with IBM since 2004.  Hughes earned a Bachelor’s degree from Kansas University, and a Master’s degree in Computer Science from Kansas State University.  Writing about the original Dream Team, Hughes authored a book on the 1936 US Olympic basketball team, a squad composed of oil refinery laborers and film industry stage hands. You can follow him on Twitter: @rhughes134

Making faster decisions at the point of engagement with IBM PureData System for Operational Analytics

by Rahul Agarwal

The need for operational analytics
Today, businesses across the world face challenges dealing with the increasing cost and complexity of IT, as they cope with the growing volume, velocity and diversity of information. However, organizations realize that they must capitalize on this information through the smart use of analytics to meet emerging challenges and uncover new business opportunities.


One thing that is increasingly becoming clear is that analytics is most valuable when it empowers individuals throughout the organization. Therefore, analytics needs to change from a predominantly back-office activity for a handful of experts to something that can provide pervasive, predictive, near-real-time information for front-line decision makers.

Low-latency analytics on transactional data, or operational analytics, provides actionable insight at the point of engagement, giving organizations the opportunity to deliver impactful and engaging services faster than their competition. So what should one look for in an operational analytics system?

Technical capabilities
A high percentage of queries to ‘operational analytics’ systems—often up to 80% — are interactive lookups that are focused on data about a specific customer, account or patient. To deliver the correct information as rapidly as possible, systems must be optimized for the right balance of analytics performance and operational query throughput.


IT requirements
To maximize the benefits of operational analytics, one needs a solution that quickly delivers value, performance, scale and efficiency, while reducing the need for IT experts to design, integrate and maintain IT systems. In addition, one should look for a system that comes with deep levels of optimization to achieve the desired scale, performance and service quality, since assembling the right skills to optimize these systems is a costly and often difficult endeavour.

Flexibility
The ideal system should provide analytic capabilities to deliver rapid and compelling return on investment now; and this system must grow to meet new demands so that it remains as relevant and powerful in the future as it is today. In addition, the system should have the flexibility to meet these demands without disrupting the free-flow of decision support intelligence to the individuals and applications driving the business.

IBM PureData System for Operational Analytics
The IBM PureData System for Operational Analytics helps organizations meet these complex requirements with an expert integrated data system that is designed and optimized specifically for the demands of an operational analytics workload.
Built on IBM POWER Systems servers with IBM System Storage and powered by IBM DB2 software, the system is a complete solution for operational analytics that provides both the simplicity of an appliance and the flexibility of a custom solution. The system has recently been refreshed with the latest technology to help customers make faster, fact-based decisions, and now offers:

  • Accelerated performance with the help of new, more powerful servers that leverage POWER8 technology, and improved tiered storage that uses spinning disks for ‘cool’ data and IBM FlashSystem™ storage for ‘hot’ or frequently accessed data.
  • Enhanced scalability that allows the system to grow to peta-scale capacity. In addition, nodes of the refreshed system can be added to the previous generation of PureData System for Operational Analytics, providing better protection for your technology investment.
  • A reduced data center footprint as a result of increased hardware density.

So explore the benefits and use cases of PureData System for Operational Analytics by visiting our website, ibm.com/software/data/puredata/operationalanalytics, and by connecting with IBM experts.

About Rahul Agarwal

Rahul Agarwal is a member of the worldwide product marketing team at IBM that focuses on data warehouse and database technology. Before joining IBM, Rahul held a variety of business management, product marketing, and other roles at companies including HCL Technologies and HP.  Rahul studied at the Indian Institute of Management, Kozhikode and holds a bachelor of engineering (electronics) degree from the University of Pune, India. Rahul’s Twitter handle is @rahulag80.


 

IBM Fluid Query 1.0: Efficiently Connecting Users to Data

by Rich Hughes

Launched on March 27th, IBM Fluid Query 1.0 opens doors of “insight opportunity” for IBM PureData System for Analytics clients. In the evolving data ecosystem, users want and need accessibility to a variety of data stores in different locations. This only makes sense, as newer technologies like Apache Hadoop have broadened analytic possibilities to include unstructured data. Hadoop is the data source that accounts for most of the increase in data volume.  By observation, the world’s data is doubling about every 18 months, with some estimates putting the 2020 data volume at 40 zettabytes, or 40 × 10²¹ bytes. This increase by decade’s end would represent a 20-fold growth over the 2011 world data total of 1.8 × 10²¹ bytes.¹ IT professionals as well as the general public can intuitively feel the weight and rapidity of data’s prominence in our daily lives. But how can we cope with, and not be overrun by, relentless data growth? The answer lies, in part, with better data access paths.



IBM Fluid Query 1.0 – What is it?

IBM Fluid Query 1.0 is a specific software feature in PureData that provides access to data in Hadoop from PureData appliances. Fluid Query also promotes the fast movement of data between Big Data ecosystems and PureData warehouses.  Enabling query and data movement, this new technology connects PureData appliances with common Hadoop systems: IBM BigInsights, Cloudera, and Hortonworks. Fluid Query allows results from PureData database tables and Hadoop data sources to be merged, thus creating powerful analytic combinations.
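As a conceptual sketch only (the connections, driver choices and table names are invented, and with Fluid Query the combination happens in a single SQL statement rather than in client code), the analytic pattern looks like this:

```python
# Conceptual sketch of the analytic pattern: recent "hot" rows from the warehouse
# combined with colder archive rows from Hadoop. The connections, driver choices
# and table names are placeholders; with Fluid Query the combination happens in a
# single SQL statement inside the appliance rather than in client code like this.

def monthly_revenue(conn, table):
    cur = conn.cursor()
    cur.execute(f"SELECT sale_month, SUM(amount) FROM {table} GROUP BY sale_month")
    return dict(cur.fetchall())

def combined_view(warehouse_conn, hadoop_conn):
    hot = monthly_revenue(warehouse_conn, "sales_current")   # PureData detail
    cold = monthly_revenue(hadoop_conn, "sales_archive")     # Hadoop "Day 0" archive
    months = sorted(set(hot) | set(cold))
    return {m: hot.get(m, 0) + cold.get(m, 0) for m in months}
```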



IBM® Fluid Query Benefits

Fluid Query makes practical use of existing SQL developer skills. Workbench tools yield productivity gains because SQL remains the query language of choice when PureData and Hadoop schemas logically merge. Fluid Query is the physical bridge whereby a query is pushed efficiently to where the data resides, whether it is in your data warehouse or in your Hadoop environment. Other benefits made possible by Fluid Query include:

  • better exploitation of Hadoop as a “Day 0” archive that is queryable with conventional SQL;
  • combining hot data from PureData with colder data from Hadoop; and
  • archiving colder data from PureData to Hadoop to relieve resources on the data warehouse.

Managing your share of Big Data Growth

Fluid Query provides data access between Hadoop and PureData appliances. Your current data warehouse, the PureData System for Analytics, can be extended in several important ways over this bridge to additional Hadoop capabilities. The coexistence of PureData appliances alongside Hadoop’s beneficial features is a best-of-breed approach where tasks are performed on the platform best suited for that workload. Use the PureData warehouse for production quality analytics where performance is critical to the success of your business, while simultaneously using Hadoop to discover the inherent value of full-volume data sources.

How does Fluid Query differ from IBM Big SQL technology?

Just as IBM PureData System for Analytics innovated by moving analytics to the data, IBM Big SQL moves queries to the correct data store. IBM Big SQL supports query federation to many data sources, including (but not limited to) IBM PureData System for Analytics; DB2 for Linux, UNIX and Windows database software; IBM PureData System for Operational Analytics; dashDB, Teradata, and Oracle. This allows users to send distributed requests to multiple data sources within a single SQL statement. IBM Big SQL is a feature included with IBM BigInsights for Apache Hadoop which is an included software entitlement with IBM PureData System for Analytics. By contrast, many Hadoop and database vendors rely on significant data movement just to resolve query requests—a practice that can be time consuming and inefficient.

Learn more

Since March 27, 2015, IBM® Fluid Query 1.0 has been generally available as a software addition for PureData System for Analytics customers. If you want to understand how to take advantage of IBM® Fluid Query 1.0, check out these two sources: the on-demand webcast, Virtual Enzee – The Logical Data Warehouse, Hadoop and PureData System for Analytics, and the IBM Fluid Query solution brief. Update: Learn about Fluid Query 1.5, announced in July 2015.

About Rich,

Rich Hughes is an IBM Marketing Program Manager for Data Warehousing.  Hughes has worked in a variety of Information Technology, Data Warehousing, and Big Data jobs, and has been with IBM since 2004.  Hughes earned a Bachelor’s degree from Kansas University, and a Master’s degree in Computer Science from Kansas State University.  Writing about the original Dream Team, Hughes authored a book on the 1936 US Olympic basketball team, a squad composed of oil refinery laborers and film industry stage hands. You can follow him on Twitter: @rhughes134

Footnote:
1 “How Much Data is Out There” by Webopedia Staff, Webopedia.com, March 3, 2014.

Fluid doesn’t just describe your coffee anymore … Introducing IBM Fluid Query 1.0

by Wendy Lucas

Having grown up in the world of data and analytics, I long for the days when our goal was to create a single version of the truth. Remember when data architecture diagrams showed source systems flowing through ETL into a centralized data warehouse and then out to business intelligence applications? Wow, that was nice and simple, right – at least conceptually? As a consultant, I can still remember advising clients and helping them to pictorially represent this reference architecture. It was a pretty simple picture, but that was also a long time ago.

While IT organizations struggled with data integration, enterprise data models and producing the single source of the truth, the lines of business grew impatient and would build their own data marts (or data silos).  We can think of this as the first sign of the requirement for user self-service. The goal behind building the consolidated, enterprise, single version of the truth never went away. Sure, we still want the ability to drive more accurate decision-making, deliver consistent reporting, meet regulatory requirements, and so on. However, achieving this goal became very difficult as requirements for user self-service, increased agility, new data types, lower-cost solutions, better business insight and faster time to value became more important.

Recognizing the Logical Data Warehouse

Enterprises have developed collections of data assets that each provide value for specific workloads and purposes. This includes data warehouses, data marts, operational data stores and Hadoop data stores to name a few. It is really this collection of data assets that now serves as the foundation for driving analytics, fulfilling the purpose of the data warehouse within the architecture. The Logical Data Warehouse or LDW is a term we use to describe the collection of data assets that make up the data warehouse environment, recognizing that the data warehouse is no longer just a single entity. Each data store within the Logical Data Warehouse can be built on a different platform, fit for the purpose of the workload and analytic requirements it serves.



But doesn’t this go against the single version of the truth? The LDW will still struggle to deliver on the goal behind the single version of the truth, if it doesn’t have information governance, common metadata and data integration practices in place. This is a key concept. If you’re interested in more on this topic, check out a recent webcast by some of my colleagues on the “Five Pitfalls to Avoid in Your Data Warehouse Modernization Project: Making Data Work for You.”

Unifying data across the Logical Data Warehouse

Logically grouping separate data stores into the LDW does not necessarily make our lives easier. Even assuming you have followed good information governance practices, you still have data stores in different places, perhaps on different platforms. Haven’t you just made life infinitely more difficult for the application developers and users who want self-service? Users need the ability to leverage data across these various data stores without having to worry about the complexity of where to find it, or about rewriting their applications. And let’s not forget the needs of IT. DBAs struggle to manage capacity and performance on data warehouses while listening to Hadoop administrators brag about the seemingly endless, lower-cost storage and the new data types they can manage. What if we could have the best of all worlds: seamless access to data across a variety of stores, formats and platforms, and the capability for IT to manage Hadoop and data warehouses alongside each other in a way that leverages the strengths of both?

Introducing IBM Fluid Query

IBM Fluid Query is the capability to unify data across the Logical Data Warehouse, providing seamless access to data in its various forms and locations. No matter where a user connects within the Logical Data Warehouse, they have access to all data through the same standard API, SQL and analytics interfaces. IBM Fluid Query powers the Logical Data Warehouse, giving users the ability to combine numerous types of data from various sources in a fast and agile manner to drive analytics and deeper insight, without worrying about connecting to multiple data stores, using different syntaxes or APIs, or changing their applications.
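Purely as a conceptual sketch, and not the Fluid Query implementation, the idea of one access layer over many stores can be pictured like this; the catalog and backend names are invented:

```python
# Purely conceptual sketch of one access layer over several data stores, in the
# spirit of the Logical Data Warehouse. The catalog and backend names are invented;
# Fluid Query itself performs this routing inside the warehouse, not in client code.

# Which backend currently holds each logical table (invented for illustration).
CATALOG = {
    "sales_current": "warehouse",
    "sales_archive": "hadoop",
    "customer_master": "operational_db",
}

def run_query(table, sql, connections):
    """Route a query to whichever store holds the table; callers keep the same
    SQL and the same API even when data moves between stores."""
    backend = CATALOG[table]
    cur = connections[backend].cursor()
    cur.execute(sql)
    return cur.fetchall()

# Usage, given a dict of DB-API connections such as
# {"warehouse": ..., "hadoop": ..., "operational_db": ...}:
#   rows = run_query("sales_archive", "SELECT COUNT(*) FROM sales_archive", connections)
```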

In its first release, IBM Fluid Query 1.0 will provide users of the IBM PureData System for Analytics the capability to access Hadoop data from their data warehouse and move data between Hadoop and PureData if needed. High performance is about moving the query to the data, not the data to the query. This provides extreme value to PureData users who want the ability to merge data from their structured data warehouse with Hadoop for powerful analytic combinations, or more in-depth analysis. IBM Fluid Query 1.0 is part of a toolkit within Netezza Platform Software (NPS) on the appliance so it’s free for all PureData System for Analytics customers.



For Hadoop users, IBM also provides IBM Big SQL which delivers Fluid Query capability. Big SQL provides the ability to run queries on a variety of data stores, including PureData System for Analytics, DB2 and many others from your IBM BigInsights Hadoop environment. Big SQL has the ability to push the query to the data store and return the result to Hadoop without moving all the data across the network. Other Hadoop vendors provide the ability to write queries like this but they move all the data back to Hadoop before filtering, applying predicates, joining, etc. In the world of big data, can you really afford to move lots of data around to meet the queries that need it?

IBM Fluid Query 1.0 became generally available on March 27 as a software addition for PureData System for Analytics customers. If you are an existing customer and want to understand how to take advantage of IBM Fluid Query 1.0, or would just like more information, I encourage you to listen to this on-demand webcast, Virtual Enzee – The Logical Data Warehouse, Hadoop and PureData System for Analytics, and check out the solution brief. Existing PureData System for Analytics customers can also download the software. Update: Learn about Fluid Query 1.5, announced in July 2015.

About Wendy,

Wendy Lucas is a Program Director for IBM Data Warehouse Marketing. Wendy has over 20 years of experience in data warehousing and business intelligence solutions, including 12 years at IBM. She has helped clients in a variety of roles, including application development, management consulting, project management, technical sales management and marketing. Wendy holds a Bachelor of Science in Computer Science from Capital University, and you can follow her on Twitter at @wlucas001.