By Bill O’Connell
The term “Big Data” carries a lot of hype in the industry. Of course, the term is also fun to discuss, given the many cool and interesting use cases we read about within all that hype.
However, as an IT-focused engineer who needs to both
(i) steer IBM technology direction, and
(ii) work with our clients on enterprise designs that address shifting business analytical needs using IBM technology,
… one thing is very clear: a fundamental shift is occurring in the industry, forcing client organizations to re-think how their enterprises are designed, and, in doing so, to look at which technology will help them achieve this fundamental shift the fastest on that journey.
Case in point: if you are already hearing phrases such as “IT is too slow” or “it takes too long for business users to get access to data”, or you are seeing the business simply pull data from various IT-supplied systems and work outside of IT, then IT is already a problem and is no longer a catalyst for the business’s growing need for ever-faster agility. Moreover, within the last couple of years we have of course seen Forrester talking about “Information Fabric 3.0” and Gartner talking about the “Logical Data Warehouse”; in reality, both are addressing the same issue of business agility of data access, so the business can enhance its analytics more easily. The bottom line: the business simply needs holistic self-service, self-provisioning, collaboration, and curation of “their data”. In reality, IT is just the caretaker of the business’s data.
So, how do we fix this as IT? The answer is simple, right? Consolidate more. And what technology will allow that? That will of course fix the problem and make more integrated data available faster to the business. Then they will of course stop pressuring us in IT for access to more and more of the data they need. Well … WRONG.
So, to net out what is happening: IBM is seeing two competing forces that are causing a friction zone between IT and the business in addressing the above. Furthermore, Big Data (which to me simply means access to all business data, within appropriate security roles, et al.) is making this worse, since the business simply wants access to more types of data to make better business decisions. It is their data in the end, and in most cases they cannot find much of it.
Hence, two competing forces, which we in IT are challenged to solve at the same time:
(i) Business speed and agility for each business area, independent of the others
- The business areas need to operate as silos, independent enough to react and adapt quickly to market changes.
(ii) Business access to holistic data across business areas, so that each business area can understand its customers better, et al.
- The business needs to operate such that it sees all data across the enterprise (i.e., more monolithic than siloed across the business areas).
By the way, these are competing enterprise design directions (monolithic versus silo). This session will address how to solve both and be a technology enabler for the business to move faster and more cheaply.
The problem is that, more often than not, IT tries to solve these enterprise designs the only way we have historically done it: consolidate. Well, why not? That is what the business is really telling IT, right? The answer is yes, to a point. However, IT historically over-consolidates in trying to address business pressures, and over-consolidation has the reverse effect for the business. Then we start to hear not only the “too slow” comments from the business; worse, we hear the business saying that IT is also way too expensive and that it does not see the value from the dollars IT spends to support it!
Can you blame the business? No. Worse, in many cases some parts of IT may be oblivious to this happening, which means they are in denial, sitting in their own IT world happy as a clam, while the business bypasses them and moves forward without them.
We call this the friction area between IT and the business.
So what do we in IT do about this? Specifically, both (a) re-thinking enterprise design and (b) deciding what technology is needed. Furthermore, what is IBM doing with Watson Foundations technology to meet this need? Technology is changing fast, and at an ever-increasing pace, especially with open source. The good news is that this change allows IT to be more forward-thinking, re-thinking enterprise design to address both competing forces at the same time, and to do so more cost-effectively.
So, this session will address the above. It will:
- talk to enterprise design patterns for “to-be” states that support analytics within the above issues, including multi-division / multi-line-of-business designs
- talk to IBM technology for enhancing core data warehousing, exploration / discovery, and 360-degree analytics within these more efficient enterprise design patterns
- talk to IBM technology for abstracting the friction area, providing holistic business self-service / self-provisioning / data curation for agility and speed in a federated, siloed world
- talk to IBM technology mapping to the emerging hybrid cloud enterprise: what goes in a [private] cloud and what data stays on-premises for the business’s analytics
- talk to IBM engineering trends across the portfolio discussed
The IBM portfolio discussed will be Watson Foundations, and will include, among other components … Data Warehouse appliances (e.g., PureData for Analytics, …), BigInsights, 360-degree analytics around Master Data, as well as InfoSphere’s move into business-level self-service / self-provisioning / business-driven data curation, … and more for driving business analytics.
Bill O’Connell is a Distinguished Engineer and Data Warehousing Chief Technical Officer for IBM Information Management (IM). Bill is responsible for Big Data analytics and governance architecture strategy across IM, as well as its integration across IBM’s Content and Analytics organizations. This includes aligning IBM analytical software with the hardware Systems Group. In this role, Bill oversees Big Data analytics and integration, as well as the technical road-map for IM analytics.
Prior to joining IBM, Bill was a Senior Engineer/Architect on NCR’s Teradata database system, as well as a research member in MPP database engineering at AT&T Bell Labs in the 1980s and early 1990s. Bill has also been an adjunct professor teaching database engineering for over 15 years at schools such as Princeton University and the University of Toronto. He received his Doctorate at the Illinois Institute of Technology.