Turn Up The Power For Software-Defined Data Warehousing

by Mona Patel

Interview with Mukta Singh

As big data analytics technologies such as Spark and Hadoop continue their move into the mainstream, you might think that the traditional data warehouse is becoming less important.

Actually, nothing could be further from the truth.

To enable data of all types to be ingested, transformed, processed and analyzed efficiently, many companies are choosing to build hybrid analytics architectures that plug cloud and open source technologies such as Spark and Hadoop into on-premises environments. At the heart of these hybrid architectures lies the data warehouse – a highly reliable resource that provides a single source of truth for enterprise reporting and analytics.

This raises an important question: since the data warehouse is so central to the hybrid analytics architecture, how can we make sure it performs well and cost-effectively?

Conventional wisdom says that the infrastructure doesn’t matter – that running these vital systems of record on commodity hardware is perfectly adequate. But when you look at the numbers, you may begin to question that view.

To understand why the right hardware – in this case, IBM Power Systems – can make a real difference, I spoke with Mukta Singh, Director of Data Warehousing at IBM. In my conversation with Mukta, we take a deeper dive into why IBM’s software-defined data warehouse – IBM dashDB Local – on IBM Power Systems offers a better price/performance ratio compared to commodity hardware.

Mona Patel: Can you tell our readers a little bit about the Power Architecture? What is so unique about it?

Mukta Singh: IBM Power Systems is the dominant server platform in today’s Unix market, with over 50 percent market share. It has also become a leading platform for Linux systems, and we have seen tremendous growth in that area in recent years.

Unlike commodity servers, which typically use x86 processors, Power servers use IBM’s Power Architecture, a unique processor architecture that has been designed specifically for big data and analytics workloads.

Mona Patel: How does IBM dashDB Local integrate with Power Systems?

Mukta Singh: dashDB Local is a software-defined data warehouse offering that has been optimized for rapid deployment and ease of management. Essentially, the system runs in a Docker container, which means it can be flexibly deployed on different types of hardware either on-premises or in a private or public cloud environment.

One of the options today is to deploy your dashDB Local container on IBM Power Systems – it runs completely transparently, and it’s optimized to allow the dashDB engine to take advantage of the unique features of the Power Architecture.

If you want to move an existing dashDB Local environment from x86 to Power Systems, that’s easy too. The latest-generation POWER8 processors can operate in little-endian (LE) mode, which is the same byte order that x86 processors use. That means that you can move a dashDB container from one platform to the other without making any changes to your applications or data.

At a higher level, we have also ensured that running dashDB on Power Systems offers the same user experience as it does on x86, so the database and OS management, monitoring and integration aspects are exactly the same. The skills are completely transferable from one platform to another, so it’s a free choice and users don’t have to worry about being locked in.
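
To picture how simple that deployment step is, here is a minimal sketch using Docker’s official Python SDK. The image tag, flags, and paths are illustrative assumptions rather than official values – check the dashDB Local documentation for the exact run options:

    import docker  # pip install docker

    client = docker.from_env()

    # Illustrative deployment of a dashDB Local container; the image name,
    # volume path, and flags below are placeholders, not official values.
    client.containers.run(
        "ibmdashdb/local:latest",      # hypothetical image tag
        name="dashDB",
        privileged=True,               # the container manages its own services
        network_mode="host",
        volumes={"/mnt/clusterfs": {"bind": "/mnt/bludata0", "mode": "rw"}},
        detach=True,
    )

Because the container image is the unit of deployment, redeploying on a different host – x86 or Power – is the same call pointed at a different Docker engine.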

Mona Patel: Can you tell us about the benefits that the Power Architecture provides for dashDB Local?

Mukta Singh: Well, for example, dashDB’s analytics engine is built on IBM BLU Acceleration – a columnar, in-memory technology that cuts query run-times from hours or minutes to just seconds.

BLU Acceleration is designed to take advantage of multi-threaded cores, and Power processors have more threads per core than most current x86 processors. In fact, if you compare an IBM POWER8 processor to an Intel Broadwell EX, the POWER8 has four times as many threads per core. That means that if you have a query that BLU can parallelize, you will get much better performance from Power Systems.

Similarly, because dashDB’s BLU Acceleration does all the processing in-memory, the bandwidth between the processor and the memory is very important. Again, Power Systems has a huge advantage here, with four times as much memory bandwidth as the x86 equivalent.

Finally, the processor’s cache size is important. BLU is engineered to do the majority of its processing in the CPU cache, so it doesn’t need to repeatedly fetch data from RAM, which is a much slower process. Power processors offer four times as much cache as x86, which means lower latency and even fewer trips to RAM. So they play to the strengths of dashDB’s query engine.
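
To ground this in something concrete, here is a hedged sketch using IBM’s ibm_db Python driver; the connection details and table are placeholders. ORGANIZE BY COLUMN is the DB2/BLU syntax for a columnar table (in dashDB, tables are column-organized by default):

    import ibm_db  # IBM's Python driver for DB2 and dashDB

    # Connection details are placeholders.
    conn = ibm_db.connect(
        "DATABASE=BLUDB;HOSTNAME=example.host;PORT=50000;PROTOCOL=TCPIP;"
        "UID=user;PWD=password", "", "")

    # A column-organized table is stored in BLU's compressed columnar format.
    ibm_db.exec_immediate(conn, """
        CREATE TABLE sales (
            sale_date DATE,
            store_id  INTEGER,
            amount    DECIMAL(12,2)
        ) ORGANIZE BY COLUMN""")

    # A scan-heavy aggregate like this is exactly the kind of query BLU
    # parallelizes across the many hardware threads of a POWER8 core.
    stmt = ibm_db.exec_immediate(
        conn, "SELECT store_id, SUM(amount) FROM sales GROUP BY store_id")
    print(ibm_db.fetch_assoc(stmt))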

Mona Patel: So how do those numbers translate in terms of performance and cost-efficiency?

Mukta Singh: We’ve benchmarked dashDB Local on a 24-core POWER8 server against a 44-core x86 server.

The Power server delivered 1.2 times the throughput, despite having 45 percent fewer cores. To look at it another way, each POWER8 core offered 2.2 times the throughput of its x86 equivalent. Combined with competitive pricing for Power scale-out servers, that leadership performance makes dashDB Local on Power a very compelling price/performance proposition.
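
Those two figures are consistent with each other, as a quick back-of-the-envelope check shows:

    # Sanity check of the benchmark figures quoted above.
    power_cores, x86_cores = 24, 44
    server_throughput_ratio = 1.2  # POWER8 server vs. x86 server

    core_reduction = 1 - power_cores / x86_cores                        # ~0.45
    per_core_ratio = server_throughput_ratio * x86_cores / power_cores  # 2.2

    print(f"{core_reduction:.0%} fewer cores, "
          f"{per_core_ratio:.1f}x throughput per core")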

Mona Patel: How do you see the market for dashDB Local on Power Systems? Is this something that customers have been asking for?

Mukta Singh: Even when we started bringing dashDB Local to market last year, there were Power clients who were interested. As I mentioned earlier, Power has a dominant share of the Unix market, and there are thousands of companies whose businesses are built on DB2 or Oracle databases running on Power Systems. For companies that rely on Power Systems already, the idea of running dashDB Local on their existing infrastructure is very attractive.

But the results of our benchmark suggest that this isn’t just a good idea for existing Power clients – it’s also an opportunity for new clients to start out running dashDB on a hardware platform that is tailor-made for high-performance analytics.

And for any client who currently runs dashDB on x86 servers, the message we’d like to get across is that it’s easy to move to Power Systems. It’s faster, it’s more cost-effective, and you still get all the ease of use and ease of management that you’re used to with your existing dashDB environment.

Mona Patel: OK, last question: where can our readers go to learn more about dashDB Local on Power? Can they try out dashDB Local on Power Systems before they buy?

Mukta Singh: Yes, we offer a free trial with a Docker ID – please visit dashDB.com to learn more and access the trial.

About Mona,

Mona Patel is currently the Portfolio Marketing Manager for IBM dashDB, the future of data warehousing. With over 20 years of experience analyzing data at The Department of Water and Power, Air Touch Communications, Oracle, and MicroStrategy, Mona decided to grow her career at IBM, a leader in data warehousing and analytics. Mona received her Bachelor of Science degree in Electrical Engineering from UCLA.

One Cloud Data Warehouse, Three Ways

by Mona Patel

There’s something very satisfying about using a single cloud database solution to solve many business problems. This is exactly what BPM Northwest experiences with IBM dashDB when delivering data and analytics solutions to clients worldwide.

That success led BPM Northwest to share its implementations and best practices with IDC.

In the resulting webcast, the two teams discuss the value and realities of moving analytical workloads to the cloud, along with the challenges around governance, data integration, and skills that organizations face as they seize the opportunities of a cloud data warehouse.

In the webcast, you will hear three ways that you can utilize IBM dashDB:

  • New applications, with some integration with on-premises systems
  • Self-service, business-driven sandbox
  • Migrating existing data warehouse workloads

After watching the webcast, consider how the IBM dashDB use cases discussed might apply to your own challenges, and whether a hybrid data warehouse is the right solution for you.

Want to give IBM dashDB on Bluemix a try? Before you sign up for a free trial, take a tutorial tour on the IBM dashDB YouTube channel to learn how to load data from your desktop, enterprise, and internet data sources, and then see how to run simple to complex SQL queries with your favorite BI tool or integrated R/RStudio. You can also watch how IBM dashDB integrates with other value-added Bluemix services such as Dataworks Lift and Watson Analytics, so that you can bring together all relevant data sources for new insights.

About Mona,

Mona Patel is currently the Portfolio Marketing Manager for IBM dashDB, the future of data warehousing. With over 20 years of experience analyzing data at The Department of Water and Power, Air Touch Communications, Oracle, and MicroStrategy, Mona decided to grow her career at IBM, a leader in data warehousing and analytics. Mona received her Bachelor of Science degree in Electrical Engineering from UCLA.

Start Small and Move Fast: The Hybrid Data Warehouse

by Mona Patel

In the world of cutting-edge big data analytics, the same long-standing obstacles to gaining meaningful insight still exist – getting data in and getting data out. Addressing them takes the utmost flexibility, especially when layered with the agile needs of the business.

Why spend millions of dollars replacing your data and analytics environment with the latest technology promise when you can leverage existing investments, resources, and skills to achieve the same – and sometimes better – insight?

Consider a hybrid data warehouse. This approach allows you to start small and move fast. It provides the best of both worlds – flexibility and agility without breaking the bank. You can RAPIDLY serve up quality data managed by your data warehouse, blend it with newer data sources and data types in the cloud, and apply integrated analytics such as Spark or R – all without additional IT resources and expertise. How is this possible? IBM dashDB.

Read Aberdeen’s latest report on The Hybrid Data Warehouse.

Watch Aberdeen Group’s Webcast on The Hybrid Data Warehouse.

Let me give you an example. We live in a digital world, and organizations are keen to improve customer data capture across mobile, web, IoT, social media, and more to generate new insights. A telecommunications client facing heavy competition wanted to quickly deliver unique mobile services for an upcoming event, aiming to acquire new customers by collecting and analyzing mobile and social media data. Taking a hybrid data warehouse approach, the client was able to start small and move fast, uncovering new mobile service options.

Customer information generated from these newer data sources was blended with existing customer data managed in the data warehouse to deliver new insights. IBM dashDB provided a high-performing public cloud data warehouse service that was up and running in minutes. Automatic transformation of unstructured geospatial data into structured data, in-memory columnar processing, in-database geospatial analytics, integration with Tableau, and pricing were some of the key reasons IBM dashDB was chosen.

This brings me back to my first point – you don’t have to spend millions of dollars to capitalize on getting data in and getting data out. Clients like the one described above took advantage of the Cloudant JSON document store integration, enabling them to get data into IBM dashDB rapidly and with ease – no ETL processing required. Automatic schema discovery loads and replicates unstructured JSON documents that capture IoT, web, and mobile-based data into a structured format. Getting data out was just as simple, because IBM dashDB provides in-database analytics and works with familiar, integrated SQL-based tools such as Cognos, Watson Analytics, Tableau, and MicroStrategy. IBM dashDB is a great example of how a highly compatible cloud database can extend or modernize your on-premises data warehouse into a hybrid one to meet time-sensitive business initiatives.
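
To picture what automatic schema discovery is doing, here is a toy Python sketch – an illustration of the concept, not the actual Cloudant/dashDB implementation – of flattening a nested JSON document into the kind of flat row a relational warehouse can load:

    # Toy sketch: flatten a nested JSON document into column/value pairs.
    doc = {
        "customer": {"id": 42, "name": "Ada"},
        "device": "mobile",
        "location": {"lat": 34.05, "lon": -118.24},
    }

    def flatten(obj, prefix=""):
        """Nested keys become dotted column names, e.g. customer.id -> 42."""
        row = {}
        for key, value in obj.items():
            name = prefix + key
            if isinstance(value, dict):
                row.update(flatten(value, name + "."))
            else:
                row[name] = value
        return row

    print(flatten(doc))
    # {'customer.id': 42, 'customer.name': 'Ada', 'device': 'mobile',
    #  'location.lat': 34.05, 'location.lon': -118.24}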

What exactly is a hybrid data warehouse?  A hybrid data warehouse introduces technologies that extend the traditional data warehouse to provide key functionality required to meet new combinations of data, analytics and location, while addressing the following IT challenges:

  • Deliver new analytic services and data sets to meet time-sensitive business initiatives
  • Manage escalating costs due to massive growth in new data sources, analytic capabilities, and users
  • Achieve data warehouse elasticity and agility for ALL business data

Still not convinced of the power of a hybrid data warehouse? Hear what Aberdeen Group’s expert Michael Lock has to say in this 30-minute webcast.

About Mona,

Mona Patel is currently the Portfolio Marketing Manager for IBM dashDB, the future of data warehousing. With over 20 years of experience analyzing data at The Department of Water and Power, Air Touch Communications, Oracle, and MicroStrategy, Mona decided to grow her career at IBM, a leader in data warehousing and analytics. Mona received her Bachelor of Science degree in Electrical Engineering from UCLA.

How dashDB Helps Media Channels Boost Revenues And Viewership

By Harsimran Singh Labana

Have you ever wondered how a media channel decides which ad airs at what time? There is an analytics science behind it.

Cable and broadcast networks pay studios large sums of money for the right to broadcast a specific show or movie at specific times on specific channels. To achieve a return on that investment, networks must design TV schedules and promotional campaigns to maximize viewership and boost advertising revenues.

RSG Media is an IBM dashDB managed service client that partners with cable and broadcast, entertainment, games and publishing firms to provide insights that help maximize revenue from content, advertising and marketing inventories. Shiv Sehgal, Solutions Architect at RSG Media, says, “We had the rights data, the scheduling data and the advertising revenues data. If we could combine this with viewership and social media data, we could give our clients a true 360-degree view of their operations and profitability, down to the level of individual broadcasts. The missing piece of the puzzle was to build a data and analytics capability that could bring all the data together and turn it into business insight – and that’s where IBM came in.”

RSG Media chose IBM because of its complete vision for cloud analytics. This includes an integrated set of solutions for building advanced analytics applications and coordinating them with all the relevant data services in the cloud.

RSG Media’s Big Knowledge Platform is built on the IBM® Cloudant® NoSQL document store and the IBM dashDB™ data warehouse service, orchestrated through the IBM Bluemix® cloud application development platform. Cloudant’s Schema Discovery Process (SDP) is used to ingest and translate semi-structured data from more than 50 sources, and structure that data into a schema that the dashDB relational data warehouse understands.
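
For a flavor of the ingestion side, here is a minimal sketch using the open-source cloudant Python library; the account, credentials, and database name are illustrative, and the Schema Discovery Process itself is configured as part of the Cloudant-to-dashDB warehousing integration rather than in application code:

    from cloudant.client import Cloudant  # pip install cloudant

    # Account and credentials are placeholders.
    client = Cloudant("username", "password",
                      url="https://username.cloudant.com", connect=True)

    # Semi-structured viewership events land in Cloudant as JSON documents;
    # SDP later flattens them into relational tables in dashDB.
    db = client.create_database("viewership_events")
    db.create_document({
        "show": "Example Show",
        "channel": "EX1",
        "aired_at": "2017-03-01T20:00:00Z",
        "social": {"mentions": 1284, "sentiment": 0.62},
    })

    client.disconnect()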

RSG Media is not stopping here: the team is also excited about Watson Analytics and how it predicts customer behavior. Learn more about RSG Media’s success using dashDB and Cloudant solutions on Bluemix.

About Harsimran,
Harsimran Singh Labana is the Portfolio Marketing Manager for IBM’s Data Warehousing team. Working in a worldwide role, he ensures marketing support for IBM’s solutions. He has been with IBM for close to five years, working in diverse roles spanning sales and social media marketing. He lives in Bangalore, India with his wife and son.

Things you need to know when switching from Oracle database to Netezza (Part 3)

by Andrey Vykhodtsev

In my previous two posts I covered the differences in architecture between IBM PureData System for Analytics and Oracle Database, as well as differences in SQL. (See below for links.) In this post, I am going to cover another important topic – additional structures that speed-up data access.

Partitions, Indexes, Materialized Views

Oracle Database relies on indexes, partitions and materialized views for performance. In Oracle, indexes are designed to speed up point searches or range searches that touch a very small percentage of the data. Because of the B-tree index structure, if you touch a large percentage of the data, using the index will be much slower than a full scan of the whole table. If you hit this problem, you have probably turned to partitioning – which in Oracle is a paid feature available only with certain editions. You also have materialized views, with which you can put the results of complex queries on disk for later re-use. These structures are designed with general-purpose workloads (analytical processing plus transactional processing) in mind, and they can be complex and unwieldy to maintain.

By contrast, with PureData you have fewer worries. The trade-off, as I said in my first post, is that PureData is not a general-purpose system, but rather an analytical-processing system.

We use ZoneMaps in PureData instead of indexes. In essence, a ZoneMap is just a table of minimum and maximum values for all columns of certain types. ZoneMaps are extremely compact, and they don’t need to be created or maintained. But that is not all: ZoneMap filtering takes place at the hardware level. (Remember the FPGAs – Field-Programmable Gate Arrays – mentioned in my first post?) The system simply never scans data that cannot match a particular query, so I/O is greatly reduced. ZoneMaps are also taken into account when you update or delete data based on a condition.

Because of ZoneMaps, you don’t need to partition your data. ZoneMaps take advantage of the natural ordering of data. For example, if you insert data daily, the date column remains naturally sorted, and range searches on that field will be extremely fast.
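
To make the idea concrete, here is a toy Python sketch of ZoneMap-style skipping – an illustration of the concept, not PureData’s actual implementation:

    # Toy illustration of ZoneMap-style data skipping: each zone of rows
    # keeps the min and max of a column; a range query scans only zones
    # whose [min, max] interval overlaps the predicate.
    zones = [
        {"min": "2017-01-01", "max": "2017-01-31", "rows": 1_000_000},
        {"min": "2017-02-01", "max": "2017-02-28", "rows": 1_000_000},
        {"min": "2017-03-01", "max": "2017-03-31", "rows": 1_000_000},
    ]

    def zones_to_scan(lo, hi):
        """Keep only the zones that can contain values in [lo, hi]."""
        return [z for z in zones if z["max"] >= lo and z["min"] <= hi]

    # Data inserted daily is naturally ordered by date, so this range
    # search touches one zone instead of all three.
    print(zones_to_scan("2017-02-10", "2017-02-20"))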

In addition to ZoneMaps, there are a couple of other techniques you can use to optimize query access to a given table. The first is called a CBT, or Clustered Base Table. This is not a separate structure that needs to be maintained, but rather an internal table organization method. If you define a table as a CBT, you can specify up to four organizing columns, on which searches will be extremely fast.

The only additional structure that PureData has is called a “materialized view”, but this is a somewhat different concept than in Oracle. In PureData, a materialized view is a subset of columns from one table that can be sorted differently than the base table, thereby speeding up access on the sorted columns. Because materialized views are ZoneMapped, they have some properties of indexes, but they are not actually indexes. Materialized views might be needed if you have “tactical” queries – queries that require fast and frequent access to small portions of data. Otherwise, you don’t usually need them.
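
For illustration, here is roughly what that DDL looks like, issued through Python’s pyodbc module against a hypothetical DSN – a sketch of the syntax’s shape rather than a verified script, so check the PureData documentation before relying on it:

    import pyodbc  # generic ODBC driver; the DSN below is a placeholder

    conn = pyodbc.connect("DSN=PUREDATA;UID=user;PWD=password")
    cur = conn.cursor()

    # Clustered Base Table: ORGANIZE ON clusters rows on up to four columns,
    # which sharpens the ZoneMaps on those columns.
    cur.execute("""
        CREATE TABLE sales (
            sale_date DATE,
            store_id  INTEGER,
            amount    NUMERIC(12,2)
        )
        DISTRIBUTE ON (store_id)
        ORGANIZE ON (sale_date, store_id)
    """)

    # Materialized view: a subset of columns sorted differently from the
    # base table, for fast "tactical" lookups on store_id.
    cur.execute("""
        CREATE MATERIALIZED VIEW sales_by_store AS
            SELECT store_id, sale_date, amount
            FROM sales
            ORDER BY store_id
    """)
    conn.commit()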

In Conclusion

As you can see, in PureData it is much simpler to maintain efficient data access. Instead of you creating and maintaining indexes over a subset of columns on each table, PureData automatically creates ZoneMaps for you. I know from experience what a nightmare index maintenance in a large data warehouse can be. Partitioning is another technique that is not needed in PureData. Instead of indexes and partitions, we use much simpler structures that are automatically maintained and applied at the hardware level (in the FPGA) at the speed of streaming data.
In my next posts, I am going to cover a few more topics that you need to be aware of when migrating from Oracle to PDA. Please stay tuned, and follow me on Twitter: @vykhand

Other posts in this series

About Andrey,
Andrey Vykhodtsev is a Big Data Technical Sales Expert covering the Central and Eastern Europe region at IBM. He has more than 12 years of experience in data warehousing and analytics, and has worked as a senior data warehouse developer, analyst, architect, and consultant across multiple industries, including the financial sector and telecommunications.

Leveraging In-Memory Computing For Fast Insights

By Louis T. Cherian

It is common knowledge that an in-memory database is fast, but what if you had an even faster solution? Think of a next-generation in-memory database that is:

  • Faster, with speed-of-thought analytics for quicker insights
  • Simpler, with reduced complexity and improved performance
  • Agile, with multiple deployment options and low migration risk
  • Competitive, helping you deliver products to market much faster

We are talking about a combination of innovations that together make IBM BLU Acceleration the next-generation in-memory solution.

So, what really goes into it?

  • In-chip analytics lets data flow through the CPU very quickly, making it faster than “conventional” in-memory solutions
  • With actionable compression, a broad range of operations can be performed on data while it is still compressed
  • With data skipping, any data that does not need to be touched to answer a query is skipped over, resulting in dramatic performance improvements
  • Shadow tables – arguably the most notable feature in the DB2 10.5 “Cancun Release” – give you the ability to run all operational reports on transactional data as it is captured (see the sketch below)
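
On that last point, here is a hedged sketch of what declaring a shadow table looks like, using IBM’s ibm_db Python driver; the connection details and tables are placeholders, and the replication that keeps the shadow table current is configured separately:

    import ibm_db  # IBM's Python driver for DB2

    conn = ibm_db.connect(
        "DATABASE=SAMPLE;HOSTNAME=example.host;PORT=50000;PROTOCOL=TCPIP;"
        "UID=user;PWD=password", "", "")

    # A shadow table is a column-organized, replication-maintained copy of a
    # row-organized transactional table; analytic queries are routed to it
    # while OLTP traffic keeps hitting the row-organized original.
    ibm_db.exec_immediate(conn, """
        CREATE TABLE sales_shadow AS (SELECT * FROM sales)
            DATA INITIALLY DEFERRED REFRESH DEFERRED
            MAINTAINED BY REPLICATION
            ORGANIZE BY COLUMN""")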

To learn more about leveraging in-memory computing for fast insights with IBM BLU Acceleration, watch this video: http://bit.ly/1BZq1lo

For more information, visit: http://www-01.ibm.com/software/data/data-management-and-data-warehousing/dw.html

About Louis T. Cherian,

Louis T. Cherian is currently a member of the worldwide product marketing team at IBM that focuses on data warehouse and database technology. He previously held a variety of product marketing roles within IBM and, before joining IBM, at Tata Consultancy Services. Louis holds a PGDBM from the Xavier Institute of Management and Entrepreneurship, as well as an engineering degree in computer science from VTU Bangalore.

IBM’s Point of View on Data Warehouse Modernization  

By Louis T. Cherian

The world of Data Warehousing continues to evolve, with an unimaginable amount of data produced each moment and rapid advances in the technologies that allow us to consume it. This gives organizations new capabilities to make better-informed business decisions, faster.
To take advantage of this opportunity in today’s era of Big Data and the Internet of Things, our customers really need a solid Data Warehouse modernization strategy. Organizations should look to optimize with new technologies and capabilities like:

  • In-memory databases to speed up analytics
  • Hadoop to analyze unstructured data and enhance existing analytics
  • Data warehouse appliances with improved capabilities and performance

To understand more about the importance of Data Warehouse Modernization and to get answers to questions like:

  • What is changing in the world of Data Warehousing?
  • Why should customers act now and what should they do?
  • What is the need for companies to modernize their Data Warehouse?
  • How do IBM Data Warehousing Solutions address the need for Data Warehouse Modernization?

Watch this video by the IBM Data Warehousing team to learn more about the breadth and depth of IBM Data Warehouse solutions. For more information, you can visit our website.

About Louis T. Cherian,

Louis T. Cherian is currently a member of the worldwide product marketing team at IBM that focuses on data warehouse and database technology. He previously held a variety of product marketing roles within IBM and, before joining IBM, at Tata Consultancy Services. Louis holds a PGDBM from the Xavier Institute of Management and Entrepreneurship, as well as an engineering degree in computer science from VTU Bangalore.