Modern Enterprise Business Intelligence and Data Management - Alan Simon

Modern Enterprise Business Intelligence and Data Management (eBook)

A Roadmap for IT Directors, Managers, and Architects

(Author)

eBook Download: PDF | EPUB
2014 | 1st edition
96 pages
Elsevier Science (publisher)
978-0-12-801745-6 (ISBN)
€27.95 incl. VAT
  • Download available immediately
Nearly every large corporation and governmental agency is taking a fresh look at their current enterprise-scale business intelligence (BI) and data warehousing implementations at the dawn of the 'Big Data Era'...and most see a critical need to revitalize their current capabilities. Whether they find the frustrating and business-impeding continuation of a long-standing 'silos of data' problem, or an over-reliance on static production reports at the expense of predictive analytics and other true business intelligence capabilities, or a lack of progress in achieving the long-sought-after enterprise-wide 'single version of the truth' - or all of the above - IT Directors, strategists, and architects find that they need to go back to the drawing board and produce a brand new BI/data warehousing roadmap to help move their enterprises from their current state to one where the promises of emerging technologies and a generation's worth of best practices can finally deliver high-impact, architecturally evolvable enterprise-scale business intelligence and data warehousing.

Author Alan Simon, whose BI and data warehousing experience dates back to the late 1970s and who has personally delivered or led more than thirty enterprise-wide BI/data warehousing roadmap engagements since the mid-1990s, details a comprehensive step-by-step approach to building a best practices-driven, multi-year roadmap in the quest for architecturally evolvable BI and data warehousing at the enterprise scale. Simon addresses the triad of technology, work processes, and organizational/human factors considerations in a manner that blends the visionary and the pragmatic.

  • Takes a fresh look at true enterprise-scale BI/DW in the 'Dawn of the Big Data Era'
  • Details a checklist-based approach to surveying one's current state and identifying which components are enterprise-ready and which ones are impeding the key objectives of enterprise-scale BI/DW
  • Provides an approach for how to analyze and test-bed emerging technologies and architectures and then figure out how to include the relevant ones in the roadmaps that will be developed
  • Presents a tried-and-true methodology for building a phased, incremental, and iterative enterprise BI/DW roadmap that is closely aligned with an organization's business imperatives, organizational culture, and other considerations

Alan Simon is a Senior Lecturer in the Information Systems Department at Arizona State University's WP Carey School of Business. He is also the Managing Principal of Thinking Helmet, Inc., a boutique consultancy specializing in enterprise business intelligence and data management architecture. Alan has authored or co-authored 29 technology and business books dating back to 1985. He has previously led national or global BI and data warehousing practices at several consultancies, and has provided enterprise data management architecture and roadmap services to more than 40 clients dating back to the early 1990s. From 1987 to 1992 Alan was a software developer and product manager with Digital Equipment Corporation's Database Systems Group, and earlier he was a United States Air Force Computer Systems Officer stationed at Cheyenne Mountain, Colorado. Alan received his Bachelor's Degree from Arizona State University and his Master's Degree from the University of Arizona, and is a native of Pittsburgh.

Chapter 1

The Rebirth of Enterprise Data Management


Abstract


Tremendous interest exists today in enterprise data management, largely because of the “dawn of the Big Data era.” Yet organizations have been pursuing the idea of well-architected data management at the enterprise level since the 1960s and 1970s, with very little lasting success. Understanding the timeline and history of enterprise data management is essential to making intelligent architecture decisions with today’s and tomorrow’s new technologies and to avoiding the mistakes of the past. From early visions of a single common, mainframe-hosted “data base” to the consequences of Y2K efforts, this chapter presents a comprehensive yet concise history of enterprise data management.

Keywords


Data
Enterprise Data
Enterprise Data History
Enterprise Data Trends
Big Data
Data Warehousing
Business Intelligence
Predictive Analytics

1.1. In the beginning: how we got to where we are today


Those who cannot remember the past are condemned to repeat it.

- George Santayana (1863–1952)
To best understand the state of enterprise data management (EDM) today, it’s important to understand how we arrived at this point during a journey that dates back nearly 50 years to the days when enormous, expensive mainframe computers were the backbone of “data processing” (as Information Technology was commonly referred to long ago) and computing technology was still in its adolescence.

1.1.1. 1960s and 1970s


Many data processing textbooks of the 1960s and 1970s proposed a vision much like that depicted in Figure 1.1.
Fig. 1.1 1960s/1970s vision of a common “data base.”
The simplified architecture envisioned by many prognosticators called for a single common “data base”1 that would provide a single primary store of data for core business applications such as accounting (general ledger, accounts payable, accounts receivable, payroll, etc.), finance, personnel, procurement, and others. One application might write a new record into the data base that would then be used by another application.
In many ways, this “single data base” vision is similar to the capabilities offered today by many enterprise systems vendors in which a consolidated store of data underlies enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), human capital management (HCM), and other applications that have touch-points with one another. Under this architecture the typical company or governmental agency would face far fewer conflicting data definitions and semantics; conflicting business rules; unnecessary data duplication; and other hindrances than are found in today’s organizational data landscape.
Despite this vision of a highly ordered, quasi-utopian data management architecture, the result for most companies and governmental agencies looked far more like the diagram in Figure 1.2, with each application “owning” its own file systems, tapes, and first-generation database management systems (DBMSs).
Fig. 1.2 The reality of most 1960s/1970s data environments.
Even when an organization’s portfolio of applications was housed on a single mainframe, the vision of a shared pool of data among those applications was typically nowhere in the picture. However, the various applications – many of which were custom-written in those days – still needed to share data among themselves. For example, Accounts Receivable and Accounts Payable applications needed to feed data into the General Ledger application. Most organizations found themselves rapidly slipping into the “spider’s web quagmire” of numerous one-by-one data exchange interfaces as depicted in Figure 1.3.
Fig. 1.3 Ungoverned data integration via proliferating one-by-one interfaces.
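The chapter does not put numbers on the “spider’s web,” but the arithmetic behind it is easy to illustrate: with one-by-one integration, each pair of applications that must exchange data needs its own feed in each direction, so the interface count grows quadratically with the size of the portfolio. A minimal Python sketch (the application names are hypothetical, chosen to echo the examples above):

    from itertools import permutations

    # Hypothetical 1970s-style application portfolio (names are illustrative).
    apps = ["GeneralLedger", "AccountsPayable", "AccountsReceivable",
            "Payroll", "Procurement", "Personnel"]

    # Worst case for point-to-point integration: one interface per ordered
    # pair of applications (A -> B and B -> A are separate feeds).
    interfaces = list(permutations(apps, 2))
    print(f"{len(apps)} applications -> up to {len(interfaces)} one-way interfaces")

    # In general, n applications yield up to n * (n - 1) one-way interfaces:
    for n in (6, 12, 24):
        print(f"n = {n:2d}: up to {n * (n - 1)} interfaces")

Six applications already imply up to 30 one-way feeds, and doubling the portfolio roughly quadruples the web, which is why the quagmire deepened as quickly as computing spread.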
By the time the 1970s drew to a close and computing was becoming more and more prevalent within business and government, any vision of managing one’s data assets at an enterprise level was far from a reality for most organizations. Instead, most organizations were left with a world of uncoordinated, often conflicting data silos.

1.1.2. 1980s


As the 1980s progressed, the data silo problem actually began to worsen. Minicomputers had been introduced in the 1960s and had grown in popularity during the 1970s, led by vendors such as Digital Equipment Corporation (DEC) and Data General. Increasingly, the fragmentation of both applications and data moved from the realm of the mainframe into minicomputers as organizations began deploying core applications on these newer, smaller-scale platforms. Consequently, the one-by-one file transfers and other types of data exchange depicted in Figure 1.3 were now increasingly occurring across hardware, operating system platforms, and networks, many of which were only beginning to “talk” to one another. As the 1980s proceeded and personal computers (often called “microcomputers” at the time) grew wildly in popularity, the typical enterprise’s data architecture grew even more fragmented and chaotic.
Many organizations realized that they now were facing a serious problem with their fragmented data silos, as did many of the leading technology vendors. Throughout the 1980s, two major approaches took shape in an attempt to overcome the fragmentation problem:
  • Enterprise data models
  • Distributed database management systems (DDBMSs)

1.1.2.1. Enterprise Data Models

Companies and governmental agencies attempted to get their arms around their own data fragmentation problems by embarking on enterprise data model initiatives. Using conceptual and logical data modeling techniques that began in the 1970s such as entity-relationship modeling, teams of data modelers would attempt to understand and document the enterprise’s existing data elements and attributes as well as the details of relationships among those elements. The operating premise governing these efforts was that by investing the time and resources to analyze, understand, and document all of the enterprise’s data across any number of barriers – application, platform, and organizational, in particular – the “data chaos” would begin to dissipate and new systems could be built leveraging the data structures, relationships, and data-oriented business rules that already existed.
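To make the premise concrete, here is a toy fragment of what such a model captures: entities, their attributes, the relationship between them, and an attached business rule, expressed in Python rather than entity-relationship notation (the entities and the rule are hypothetical, not drawn from the book):

    from dataclasses import dataclass

    @dataclass
    class Customer:                  # entity
        customer_id: int             # key attribute
        name: str
        credit_limit: float          # attribute governed by a business rule

    @dataclass
    class Invoice:                   # entity
        invoice_id: int
        customer: Customer           # relationship: each Invoice belongs to one Customer
        amount: float

    # A data-oriented business rule the model would document alongside the
    # structure: an invoice may not push a customer past its credit limit.
    def violates_credit_limit(invoice: Invoice, outstanding_balance: float) -> bool:
        return outstanding_balance + invoice.amount > invoice.customer.credit_limit

An enterprise data model attempted to record thousands of such entities, attributes, relationships, and rules across every application at once, which is precisely what made the efforts so hard to keep current.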
While many enterprise data modeling initiatives did produce a better understanding of an organization’s data assets than existed before a given initiative began, these efforts largely withered over time and tended not to yield anywhere near the economies of scale originally envisioned at project inception. The application portfolio of the typical 1980s organization was both fast-growing and highly volatile, and an enterprise data modeling initiative almost inevitably fell behind the new and rapidly changing data under the control of any given application or system. The result: even before completion, most enterprise data models became “stale” and outdated, and were quietly mothballed.
(As most readers know, data modeling techniques are still widely used today, although primarily as part of the up-front analysis and design phase for a specific software development or package implementation project rather than attempting to document the entire breadth of an enterprise’s data assets.)

1.1.2.2. Distributed Database Management Systems (DDBMSs)

Enterprise data modeling efforts on the parts of companies and governmental agencies were primarily an attempt to understand an organization’s highly fragmented data. The data models themselves did nothing to help facilitate the integration of data across platforms, databases, organizational boundaries, etc.
To address the data fragmentation problem from an integration perspective, most of the leading computer companies and database vendors of the 1980s began work on DDBMSs. The specific technical approaches from companies such as IBM (Information Warehouse), Digital Equipment Corporation (RdbStar), Ingres (Ingres Star), and others varied from one to another, but the fundamental premise of most DDBMS efforts was as depicted in Figure 1.4.
Fig. 1.4 The DDBMS concept.
The DDBMS story went like this: regardless of how scattered an organization’s data might be, a single data model-driven interface could sit between applications and end-users and the underlying databases, including those from other vendors operating under different DBMSs (#2 and #3 in Figure 1.4). The DDBMS engine would provide location and platform transparency to abstract applications and users from the underlying data distribution and heterogeneity, and both read-write access as well as read-only access to the...
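That premise can be sketched in a few lines of Python: a routing layer owns the mapping from logical table names to physical locations, so applications query a table without knowing, or caring, where it lives. This is an illustrative sketch of the concept only, not any vendor’s actual API; it uses two SQLite connections to stand in for heterogeneous databases on different platforms.

    import sqlite3

    class FederatedCatalog:
        """Toy stand-in for a DDBMS routing layer (hypothetical API)."""
        def __init__(self):
            self.location = {}          # logical table name -> owning connection

        def register(self, table, conn):
            self.location[table] = conn

        def query(self, table, sql):
            # Location transparency: the caller names a table, not a site.
            return self.location[table].execute(sql).fetchall()

    # Two "sites" standing in for databases on different platforms.
    site_a = sqlite3.connect(":memory:")
    site_b = sqlite3.connect(":memory:")
    site_a.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    site_a.execute("INSERT INTO customers VALUES (1, 'Acme Corp')")
    site_b.execute("CREATE TABLE invoices (id INTEGER, customer_id INTEGER, amount REAL)")
    site_b.execute("INSERT INTO invoices VALUES (100, 1, 250.0)")

    catalog = FederatedCatalog()
    catalog.register("customers", site_a)
    catalog.register("invoices", site_b)

    print(catalog.query("customers", "SELECT * FROM customers"))   # routed to site A
    print(catalog.query("invoices", "SELECT * FROM invoices"))     # routed to site B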

Publication date (per publisher): 28 August 2014
Language: English
ISBN-10 0-12-801745-7 / 0128017457
ISBN-13 978-0-12-801745-6 / 9780128017456
PDF (Adobe DRM)
Size: 4.5 MB

Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to protect the eBook against misuse. The eBook is authorized to your personal Adobe ID at download time; you can then read it only on devices that are also registered to your Adobe ID.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly suitable for reference books with columns, tables, and figures. A PDF can be displayed on almost any device, but it is only of limited use on small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need an Adobe ID and the free Adobe Digital Editions software. We advise against using the OverDrive Media Console, as experience shows it frequently causes problems with Adobe DRM.
eReader: This eBook can be read on (almost) all eBook readers; it is not compatible with the Amazon Kindle, however.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You need an Adobe ID and a free app.

Additional feature: online reading
In addition to downloading it, you can also read this eBook online in your web browser.

Buying eBooks from abroad
For tax law reasons we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.

EPUB (Adobe DRM)
Size: 3.3 MB

File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly suitable for fiction and non-fiction. The reflowable text adapts dynamically to the display and font size, which also makes EPUB well suited to mobile reading devices.
