One Cloud Data Warehouse, Three Ways

by Mona Patel

There’s something very satisfying about using a single cloud database solution to solve many business problems. This is exactly what BPM Northwest experiences with IBM dashDB when delivering data and analytics solutions to clients worldwide.

This exciting success with dashDB compelled BPM Northwest to share its implementations and best practices with IDC.

In the webcast, the two organizations team up to discuss the value and realities of moving analytical workloads to the cloud. They also cover the challenges around governance, data integration, and skills that organizations face as they work to seize the opportunities of a cloud data warehouse.

In the webcast, you will hear three ways you can use IBM dashDB:

  • New applications, with some integration with on-premises systems
  • Self-service, business-driven sandbox
  • Migrating existing data warehouse workloads

After watching the webcast, think about how the IBM dashDB use cases discussed can apply to your own challenges and whether a hybrid data warehouse is the right solution for you.

Want to give IBM dashDB on Bluemix a try? Before you sign up for a free trial, take a tutorial tour on the IBM dashDB YouTube channel to learn how to load data from desktop, enterprise, and internet data sources, and then see how to run simple to complex SQL queries with your favorite BI tool or with integrated R/RStudio. You can also watch how IBM dashDB integrates with other value-added Bluemix services such as DataWorks Lift and Watson Analytics, so you can bring together all relevant data sources for new insights.
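If you would like a feel for what those SQL queries look like from code rather than from a BI tool, here is a minimal sketch using the ibm_db Python driver, which works against dashDB's underlying DB2 engine. The hostname, credentials, schema, and table names are hypothetical placeholders; substitute the connection details shown on your own dashDB service page in Bluemix.

```python
# Minimal sketch: running a SQL query against dashDB with the ibm_db driver.
# The hostname, port, credentials, and table below are placeholders only;
# replace them with the connection details from your dashDB service on Bluemix.
import ibm_db

dsn = (
    "DATABASE=BLUDB;"                                       # dashDB's default database name
    "HOSTNAME=your-dashdb-host.services.dal.bluemix.net;"   # placeholder host
    "PORT=50000;"
    "PROTOCOL=TCPIP;"
    "UID=your_user;"
    "PWD=your_password;"
)

conn = ibm_db.connect(dsn, "", "")

# A simple aggregate query, the same kind you might issue from a BI tool.
stmt = ibm_db.exec_immediate(
    conn,
    "SELECT SALES_REGION, SUM(REVENUE) AS TOTAL_REVENUE "
    "FROM MY_SCHEMA.SALES "
    "GROUP BY SALES_REGION "
    "ORDER BY TOTAL_REVENUE DESC"
)

# Fetch rows as dictionaries keyed by column name.
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row["SALES_REGION"], row["TOTAL_REVENUE"])
    row = ibm_db.fetch_assoc(stmt)

ibm_db.close(conn)
```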


About Mona

Mona Patel is currently the Portfolio Marketing Manager for IBM dashDB, the future of data warehousing. With over 20 years of experience analyzing data at the Department of Water and Power, AirTouch Communications, Oracle, and MicroStrategy, Mona decided to grow her career at IBM, a leader in data warehousing and analytics. Mona received her Bachelor of Science degree in Electrical Engineering from UCLA.

Driving Analytics with a Common Architectural Pattern: Focused on Data Warehousing in the Era of Big Data

by Bill O’Connell

The term “Big Data” gets a lot of hype in the industry. Of course, it is always fun to discuss the many cool and interesting use cases we read about within that hype.

However, as an IT-focused engineer who needs to both

(i) steer IBM technology direction as well as

(ii) work with our clients on their enterprise design to address shifting business analytical needs using IBM technology

… one thing is very clear: there is a fundamental shift occurring in the industry, forcing client organizations to re-think how their enterprises are designed and, in doing so, to look at which technology will help them achieve this fundamental shift the fastest.

Case in point: if you are already hearing phrases such as “IT is too slow” or “it takes too long for the business users to get access to data”, or you are seeing the business simply pull data from various IT-supplied systems and work outside of IT, then IT is already a problem and is no longer a catalyst for the business’s growing need for ever faster agility. Moreover, over the last couple of years we have seen Forrester talking about “Information Fabric 3.0” and Gartner talking about the “Logical Data Warehouse”; in reality, both address the same issue of giving the business more agile access to its data so it can enhance its analytics more easily. The bottom line is that the business needs holistic self-service, self-provisioning, collaboration, and curation of “their data”; in reality, IT is just the caretaker of the business’s data.

So, how do we fix this as IT? The answer is simple, right? Consolidate more. And what technology will allow that? That will of course fix the problem and make more integrated data available to the business faster. Then they will of course stop pressuring us in IT for access to more and more data. Well … WRONG.

To net out what is happening, IBM is seeing two competing forces that create a friction zone between IT and the business. Furthermore, Big Data (which to me simply means access to all business data, within appropriate security roles) is making this worse, since the business wants access to more types of data so it can make better business decisions. It is their data in the end, and in most cases they cannot find much of it.

Hence, two competing forces, both of which we in IT are challenged to solve at the same time:

(i) Business speed and agility for each business area independent of the others

  • The business areas need to operate as silos, independently, so they can react and adapt quickly to market changes.

(ii) Business access to holistic data across business areas, so that each business area can understand its customers better

  • The business needs to operate such that it sees all data across the enterprise (that is, more monolithic than siloed across the business areas).

By the way, these are competing enterprise design discussions (monolithic versus silo). This session will address how to solve this and how technology can enable the business to move faster and more cheaply.

The problem is that, more often than not, IT tries to solve these enterprise designs the only way we have historically done it: consolidate. Well, why not? That is what the business is really telling IT, right? The answer is yes, to a point. However, IT historically over-consolidates when trying to address the business’s pressures, and over-consolidation has the reverse effect for the business. Then we start to hear not only the “too slow” comments from the business, but worse, we hear the business saying that IT is also far too expensive and that the business is not seeing the value from the money IT spends to support them.

Can you blame the business? No. Worse, in many cases some parts of IT may be oblivious to this happening, which means they are in denial, or sitting in their IT world happy as a clam, while the business bypasses them and moves forward without them.

We call this the friction area between IT and the business.

So what do we in IT do about this? Specifically, (a) how do we re-think enterprise design, and (b) what technology is needed? Furthermore, what is IBM doing with Watson Foundations technology to meet this need? Technology is changing fast, and at an ever faster pace, especially with open source. The good news is that this change allows IT to be more forward-thinking and to re-think enterprise design so that it addresses both competing forces at the same time, in a more cost-effective way.

So, this session will address the above. It will:

  • discuss enterprise design patterns for “to-be” states that support analytics within the above issues, including multi-division / multi-line-of-business designs
  • discuss IBM technology for enhancing core data warehousing, exploration/discovery, and 360-degree analytics within these more efficient enterprise design patterns
  • discuss IBM technology for abstracting the “friction” area to provide holistic business self-service, self-provisioning, and data curation for agility and speed in a federated, siloed world
  • discuss how IBM technology maps to an emerging hybrid cloud enterprise: what goes in a [private] cloud and what data stays on-premises for the business’s analytics
  • discuss IBM engineering trends across the portfolio covered above

The IBM portfolio discussed will be Watson Foundations and will include, among other components, data warehouse appliances (e.g., PureData for Analytics), BigInsights, 360-degree analytics around master data, and InfoSphere’s move into business-level self-service, self-provisioning, and business-driven data curation for driving business analytics.


About Bill

Bill O’Connell is a Distinguished Engineer and Data Warehousing Chief Technical Officer for IBM Information Management (IM). Bill is responsible for the Big Data analytics and governance architecture strategy across IM and for its integration across IBM’s Content and Analytics organizations, which includes aligning IBM analytical software and the Hardware Systems Group. In this role, Bill oversees Big Data analytics and integration as well as the technical road map for IM analytics.

Prior to joining IBM, Bill was a senior engineering architect on NCR’s Teradata database system, as well as a research member in MPP database engineering at AT&T Bell Labs in the 1980s and early 1990s. Bill has also been an adjunct professor teaching database engineering for over 15 years at schools such as Princeton University and the University of Toronto. He received his doctorate from the Illinois Institute of Technology.