Meeting Cloud Automation and Orchestration Challenges with Cloudify from GigaSpaces

A recent Gartner study projected that through 2015, “80 percent of outages impacting mission-critical services will be caused by people and process issues, and more than 50 percent of those outages will be caused by change, configuration, release integration and handoff issues.”[i]

In addition, a recent survey conducted by the Ponemon Institute determined that “the average cost of data center downtime across industries was approximately $7,900 per minute (a 41-percent increase from the $5,600 in 2010).”[ii] VentureBeat editor-in-chief Dylan Tweney calculated that Google lost $545,000 when an outage took all of its online services down for five minutes in August 2013.[iii]

With an ever-larger portion of business being transacted online, the impact of downtime on a company’s bottom line can be significant. With the move to Cloud and SaaS delivery models, both customer-facing applications and an organization’s entire IT infrastructure are at risk. Moreover, a worst-case scenario would be different teams running different tools for each layer – a sort of “anti-DevOps” approach. No doubt, there are still siloed IT organizations where multiple, redundant and incompatible tools are being deployed.

Chris Wolf, a research vice president at Gartner, in a recent posting states, “One of the great myths today is that there is all of this centralized hybrid Cloud management happening – for the most part, it doesn’t exist in terms of what folks are actually doing. In nearly every case where we see a hybrid Cloud environment, the customer is using separate sets of tools to manage its public and private Cloud environments. There just truly isn’t a ‘single pane of glass’ today; that’s the problem.”[iv]

While Wolf’s analysis may well reflect the current state of enterprise Cloud, organizations building their own Cloud environments will surely look to standardize on automation and orchestration tools and processes that offer the greatest flexibility and speed of deployment while maintaining alignment with business goals. GigaSpaces developed Cloudify specifically to address these critical end-user requirements.

Cloud Journey

Cloudify enables customers to onboard and scale any app, on any Cloud, with no code changes, while maintaining full visibility and control.

GigaSpaces CTO Nati Shalom, in a recent blog post entitled “Eight Cloud and Big Data Predictions for 2014”, suggests, “Orchestration and automation will be the next big thing in 2014. Having said all that, the remaining challenge of enterprises is to break the IT bottleneck. This bottleneck is created by IT-centric decision-making processes, a.k.a. ‘IaaS First Approach,’ in which IT is focused on building a private Cloud infrastructure – a process that takes much longer than anticipated when compared with a more business/application-centric approach.”

In addition, Shalom states, “One of the ways to overcome that challenge is to abstract the infrastructure and allow other departments within the organization to take a parallel path towards the Cloud, while ensuring future compatibility with new development in the IT-led infrastructure. Configuration management, orchestration and workflow automation become key enablers in enterprise transition to Cloud, and will gain much attention in 2014.”

The Emergence of DevOps

Shalom also points to the emergence of DevOps, a combination of development and IT operations, as a “key example for a business-led initiative that determines the speed of innovation and, thus, competitiveness of many organizations. The move to DevOps forces many organizations to go through both cultural and technology changes in making the business and application more closely aligned, not just in goals, but in processes and tools as well.”

Cloudify Automation Slide

Cloudify Unifies the Cloud Stack

Cloudify is a Cloud orchestration platform developed by GigaSpaces that allows any application to run on any Cloud, public or private, with no code changes. More than three years ago, GigaSpaces anticipated the need for higher level tools to accelerate the onboarding and migration of mission-critical applications to the Cloud.

Since its release, Cloudify has been adopted by many large organizations – including multi-national financial institutions – as a de facto standard for bridging the gap between the IaaS layer and the application layer. With Cloudify, it is now possible to adopt a Cloud automation and orchestration framework that provides IT organizations, systems integrators, software developers and application owners with the ability to quickly deploy applications securely with standard interfaces (APIs).

Cloudify is designed to bring any app to any Cloud, enabling enterprises, ISVs, and managed service providers alike to quickly benefit from the Cloud automation and elasticity that organizations need today. Cloudify helps users maximize application onboarding and automation by externally orchestrating the application deployment and runtime.

Cloudify’s DevOps approach treats infrastructure as code, enabling users to describe deployment and post-deployment steps for any application through an external blueprint, which users can then take from Cloud to Cloud, unchanged. 

Cloudify is now available as an open source solution under the Apache license or directly through GigaSpaces for organizations looking for a premium services package. Cloudify and GigaSpaces work with many other open source Cloud automation and orchestration tools such as OpenStack’s Heat as well as Chef and Puppet. Cloudify also enables applications to migrate to and from OpenStack, HP’s Cloud Services, Rackspace, AWS, CloudStack, Microsoft Azure and VMware.

TOSCA (Topology and Orchestration Specification for Cloud Applications) from OASIS is an open standard that works to “enhance the portability of Cloud applications and services.” The goal of TOSCA is to enable cross-Cloud, cross-tool orchestration of applications on the Cloud. In Cloudify 3.0, GigaSpaces is incorporating TOSCA and using its concepts as a canonical application model.

Cloudify uses an orchestration plan, or blueprint, that is inspired by TOSCA. The blueprint contains an application topology model: IaaS components, middleware components and application components. For each of these components, the blueprint describes the component lifecycle and its dependencies on other components (dubbed relationships). In addition, each node defines a set of policies that allow Cloudify to enforce application availability and health.
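To make the blueprint concept concrete, the following is a minimal illustrative sketch in Python of a topology model with nodes, lifecycle hooks, relationships and policies. It is a conceptual stand-in only: Cloudify’s real blueprints use their own TOSCA-inspired DSL, and every name below (Node, Blueprint, the lifecycle keys, the example tiers) is hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Node:
    """One component in the topology (IaaS, middleware or application tier)."""
    name: str
    node_type: str
    lifecycle: Dict[str, Callable[[], None]] = field(default_factory=dict)  # create/configure/start hooks
    relationships: List[str] = field(default_factory=list)                  # nodes this node depends on
    policies: Dict[str, dict] = field(default_factory=dict)                 # e.g. {"scale": {...}, "auto_heal": {...}}


@dataclass
class Blueprint:
    """A toy orchestration plan: a named collection of nodes and their wiring."""
    name: str
    nodes: Dict[str, Node] = field(default_factory=dict)

    def add(self, node: Node) -> "Blueprint":
        self.nodes[node.name] = node
        return self


# A three-tier example: web depends on the backend, which depends on the database.
app = (Blueprint("petstore")
       .add(Node("db", "middleware.mysql",
                 lifecycle={"start": lambda: print("starting mysql")}))
       .add(Node("backend", "app_server.tomcat",
                 lifecycle={"start": lambda: print("starting tomcat")},
                 relationships=["db"]))
       .add(Node("web", "middleware.nginx",
                 lifecycle={"start": lambda: print("starting nginx")},
                 relationships=["backend"],
                 policies={"scale": {"min_instances": 2}})))
```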

Cloudify translates these topologies into real, managed installations by running the automation processes described in the blueprint’s workflows. These workflows trigger the lifecycle operations implemented by Cloudify plugins, which use different Cloud APIs as well as tools such as Chef, Puppet and others.
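Continuing the sketch above, a toy “install” workflow can walk the blueprint’s relationships and invoke each node’s lifecycle hooks in dependency order. In the real product those operations are delegated to plugins that call cloud APIs or tools such as Chef and Puppet; the version below only prints, and is purely illustrative.

```python
def install(blueprint: Blueprint) -> None:
    """Bring each node up only after the nodes it depends on (illustrative only)."""
    started = set()

    def bring_up(name: str) -> None:
        if name in started:
            return
        node = blueprint.nodes[name]
        for dependency in node.relationships:        # dependencies come up first
            bring_up(dependency)
        for operation in ("create", "configure", "start"):
            hook = node.lifecycle.get(operation)
            if hook:
                hook()                               # a real plugin would call a cloud API or Chef/Puppet here
        started.add(name)

    for name in blueprint.nodes:
        bring_up(name)


install(app)   # prints: starting mysql, starting tomcat, starting nginx
```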

OpenStack: The Open Source Alternative to AWS

“Deploy, manage and scale” is a mantra for rapid application delivery in the Cloud. As previously mentioned, AWS and other CSPs accomplish this through standardization on a single IaaS platform. Three years ago, AWS competitor Rackspace, along with NASA, introduced the OpenStack initiative – essentially as an open source alternative to AWS – which now has more than 50 IT vendors and CSPs actively supporting and participating in the community, including AT&T, Cisco, Dell, HP, IBM, Intel, NetApp, Red Hat, Suse, VMware and Yahoo!.

OpenStack is a Cloud operating system acting as an IaaS platform to control large pools of compute, storage and networking resources throughout a data center. First released in 2010 as open source software under the Apache license, OpenStack has become the most widely deployed IaaS platform in the world, embraced by thousands of service providers, government agencies, non-profit organizations and multinational corporations. The OpenStack community recently announced Havana, the 8th release of its software for building and supporting public, private and hybrid Cloud application infrastructure.

A recent survey conducted for Red Hat by IDG Connect shows 84 percent of enterprise IT decision makers surveyed say that OpenStack is part of future private Cloud plans. “Sixty percent of survey respondents indicated they are in the early stages of their OpenStack deployments, and have not yet either completed the implementation stage or are early in the process. Survey respondents cited management visibility (73 %); deployment speed (72 %); platform flexibility (69 %); better agility (69 %); and competitive advantage (67 %) as the unique benefits offered by OpenStack over private Cloud alternatives.”[v]

In early 2013, to keep pace with the innovation coming out of the open source Cloud community, Amazon introduced OpsWorks to provide AWS customers with a “powerful end-to-end platform that gives you an easy way to manage applications of nearly any scale and complexity without sacrificing control.” While OpsWorks is not an open source product, so far Amazon does not charge extra for it. OpsWorks does, however, support open source tooling such as Chef version 11, development languages and frameworks such as Java, PHP and Ruby on Rails, and the open source data stores MySQL and Memcached.

It remains to be seen whether a wide swath of customers will embrace OpsWorks – even within the AWS framework – because a dedicated, proprietary AWS solution that locks companies and their applications into the Amazon Cloud will likely have limited appeal for many organizations. Large enterprises and some SMBs not only need a solution that manages Cloud infrastructure resources and interactions with users, they also need portability across private and hybrid (private/public) Cloud platforms.

Cloudify User Insights

A large multinational financial services company that has implemented Cloudify offers the following insights.

  • Setup usually takes quite a bit of time when there are no best practices and no structure to follow. Cloudify allowed us to model this process in a very well defined way using application blueprints. When dealing with hundreds or even thousands of applications that we might want to migrate to the Cloud, this can be quite a timesaver.
  • Regarding ongoing operations, when an application or component fails, there is the cost of downtime and the cost of bringing the application back up again. Cloudify does a few things for us to avoid such situations.
  • Cloudify supports proactively scaling out an application when load increases, avoiding downtime caused by overload.
  • Cloudify enables auto-healing of application components upon failure, minimizing the Mean Time to Recovery (MTTR) when an application fails. (A simplified sketch of the kind of policy loop behind these behaviors appears after this list.)
  • For enterprises like ours, with private Clouds, the ability for each application to consume only the amount of resources that it needs at any given point in time means significant improvement in our data center utilization and cost savings.
  • Development and testing in many cases requires replicating complete environments, which is very costly without a good automation framework. Cloudify allowed us to set up an entire application with all of its supporting resources at the click of a button. In fact, we have a few users that are using Cloudify only for that.
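The scale-out and auto-heal behaviors mentioned in the list above amount to a monitoring loop that compares live metrics against the policies declared for each node. The sketch below is a simplified, hypothetical illustration; the metric sources, threshold and remediation callbacks are placeholders rather than Cloudify’s actual policy engine.

```python
import time


def policy_loop(get_load, unhealthy_instances, scale_out, heal,
                load_threshold=0.8, interval_seconds=30):
    """Hypothetical policy loop: scale out under load, heal failed instances."""
    while True:
        if get_load() > load_threshold:            # proactive scale-out avoids overload-driven downtime
            scale_out()
        for instance in unhealthy_instances():     # restarting failed components shrinks MTTR
            heal(instance)
        time.sleep(interval_seconds)
```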

Conclusion 

There is no doubt that enterprises are looking for additional deployment support when it comes to implementing private and public Cloud solutions. The benefits of freeing up internal infrastructure and enabling DevOps to bring applications to market more quickly are unquestionably real. What is not so clear is which path to the Cloud will provide the least friction and the fastest time to value.

Cloudify is winning converts in the banking, finance, retail, hospitality and telecom industries, and is gaining credibility with partners such as IBM, HP and Alcatel-Lucent when customers require native OpenStack and multi-Cloud support. A framework that touches all the Cloud layers, unifying the Cloud stack while simplifying DevOps, is a compelling combination of capabilities.

Enterprises looking to “Cloudify” their mission-critical applications will be hard pressed to find a more comprehensive, innovative, intuitive, open-source, community-supported solution than GigaSpaces has created in Cloudify.

This post is an excerpt from a whitepaper entitled Cloudify from GigaSpaces: Delivering Mission-Critical Applications to the Cloud at the Speed of Business. The full report can be viewed through this link.



Legal Tech Memo 2014: Driving Efficiencies and Mitigating Risk in the Era of Big Data

Now that the Information Governance message has finally gotten through to almost every CIO and General Counsel, not to mention the vendor community, what’s next?

Over successive Legal Techs (this year’s LTNY 2014 was my sixth in a row), information management technology and services have grown progressively more sophisticated. Some might argue they have also become easier to use and integrate into the enterprise.

Meanwhile, Technology Assisted Review (TAR), Predictive Coding and Early Case Assessment (ECA) have all become standard tools in the ediscovery arsenal, accepted by most courts today as viable alternatives to a heretofore far more labor-intensive and costly manual review process.

In the right hands, this new generation of ediscovery/information governance tools can be leveraged throughout a corporate legal department and beyond, by other departments including compliance, records management, risk management, marketing, finance and lines of business.

However, one of the great challenges for any large corporation or organization today remains how best to address the deluge of Big Data. With a variety of technology and service provider offerings and methodologies to choose from, what is the best approach?

Big Data Comes of Age

In 2009, Analytics, Big Data and the Cloud were emerging trends. Five years later, Big Data is an integral part of the business fabric of every information-centric enterprise. Analytic tools and their variants are essential to managing Big Data.

The Cloud is fast becoming the primary Big Data transportation vehicle, connecting individuals and organizations to vast amounts of compute power and data storage – all optimized to support myriad popular applications from Amazon, Facebook, LinkedIn and Twitter to so-called Software as a Service (SaaS) applications such as Salesforce.com and Google Analytics.

At the same time, eDiscovery requests have increased dramatically. Better tools along with more data are a recipe for a spike in data access requests, whether triggered by outside regulatory agencies, an increased number of lawsuits or internal requests from across the enterprise to leverage electronic data originally captured to meet regulatory compliance requirements.

Big Data Spike in Financial Services: Example

A Fortune 500 financial services firm I am familiar with has seen the number of internal ediscovery requests jump from 400 a year in 2004 to 400 a month in 2013, and counting. Changes to the FRCP helped accelerate the pace of ediscovery activities, and because the firm offers variable annuities and mutual funds, it is also regulated by the SEC.

Thus, the company is required by Rule 17a-4 to retain every electronic interaction between its 12,000 agents and their clients and prospects. Every month or so, it adds 100 million more “objects” to a content archive that now exceeds 3 billion emails, documents, instant messages, tweets and other forms of social media.

From this collection of Big Data, larger highly regulated firms, in particular, are compelled to carve out portions of data for a variety of purposes. Most of these larger organizations have learned to rely on a mix of in-house technologies and service providers to analyze, categorize, collect, cull, move, secure and store mostly unstructured or semi-structured data, such as documents and emails, that do not fit neatly into the structured world of traditional relational database management systems.

People and Process First, Then Technology

A recent ZDNet article on Analytics quoting Gartner analysts suggests, “Easy-to-use tools does not mean it leads to better decisions; if you don’t know what you’re doing, it can lead to spectacular failure. Supporting this point, Gartner predicts that by 2016, 25 percent of organizations using consumer data will face reputation damage due to inadequate understanding of information trust issues.”

In other words, great technology in inexperienced hands can lead to big trouble including privacy breaches, brand damage, litigation exposure and higher costs. When meeting ediscovery demands is the primary goal, most large organizations have concluded that acquiring the right combination of in-house technologies and outside services that offer specific subject matter expertise and proven process skills is the best strategy for reducing costs and data volumes.

Responsive Data vs. Data Volume: Paying by the Gig

The time-honored practice of paying to store ESI (electronically stored information) by volume in the age of Big Data is a budget buster and non-starter for many organizations. Most on-premise ediscovery tools and appliances as well as Cloud-based solutions have a data-by-volume pricing component. Therefore, it makes perfect sense to lower ESI or data volumes to lower costs.
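To see why volume reduction matters so much under per-gigabyte pricing, consider a back-of-the-envelope calculation. The numbers below are purely illustrative; the $15 per GB per month hosting rate is a hypothetical figure, not any vendor’s actual pricing.

```python
def monthly_hosting_cost(volume_gb: float, rate_per_gb: float) -> float:
    """Volume-based pricing: cost scales linearly with the amount of hosted ESI."""
    return volume_gb * rate_per_gb


# Culling a 500 GB collection down to 150 GB of potentially responsive data
# at a hypothetical $15/GB/month hosting rate:
before = monthly_hosting_cost(500, 15)   # $7,500 per month
after = monthly_hosting_cost(150, 15)    # $2,250 per month
print(f"Monthly savings: ${before - after:,.0f}")   # Monthly savings: $5,250
```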

However, organizations run the risk of deleting potentially valuable information or spoliating potentially responsive data relevant to litigation or a regulatory inquiry. This retain-or-delete dilemma favors service and solution providers whose primary focus is offering tactical, put-out-the-fire approaches to data management that worked well five or more years ago but not in today’s Big Data environment.

Big Data Volumes Driving Management Innovation

The ediscovery.com pulse benchmarks launched last November by Kroll Ontrack indicate, “Since 2008, the average number of source gigabytes per project has declined by nearly 50 gigabytes due to more robust tools to select what is collected for ediscovery processing and review.”  In Kroll’s words, the decline is the result of “Smarter technology, savvier litigants and smaller collections.”

But technology innovation and smarter litigants tell only a portion of the story. The larger picture was revealed during LTNY’s Managing Big Data Track panel sessions moderated by UnitedLex President Dave Deppe and Jason Straight, UnitedLex’s Chief Privacy Officer.

Deppe’s panel included senior litigation support staff from GE and Google. Several key takeaways from the session include:

  • A long-term strategy for managing the litigation lifecycle is critical and goes well beyond deploying ediscovery tools and services. Acquiring outside subject matter expertise to mitigate internal litigation support bandwidth issues and provide defensibility through proven processes is key.
  • Price is not the deciding factor in selecting service providers. Efficiency, quality and a comprehensive, end-to-end approach to litigation lifecycle management are more important.
  • The relationship between inside and outside counsel and providers needs to be managed effectively. Good people (SMEs) and process can move the dial farther than technology, which often does not make a difference – especially in the wrong hands.
  • Building a litigation support team that includes a panel of key partners, directed by a senior member of the internal legal staff, can dramatically influence ediscovery outcomes and help lower downstream costs. Aggressively negotiating search terms and, as a result, influencing outside counsel is just one example.
  • When not enough attention is paid to search terms, 60 to 95 percent of documents are responsive. Litigation support teams must stop reacting and arm outside counsel with the right search terms to obtain the right results.
  • Over collecting creates problems downstream. Conduct interviews with data owners and custodians. Ask them what data should be on hold. They will likely know.
  • Define, refine and repeat the process, train others and keep it in place.
  • Develop litigation readiness protocols, come up with a plan and play by the rules.

Merging of eDiscovery and Data Security

The focal point of Straight’s panel was on why and how organizations should develop a “Risk-based approach to cyber security threats.” With the advent of Cloud computing, sensitive data is at risk from both internal and external threats.

Panelist Ted Kobus, National Co-leader of Privacy and Data Protection at law firm Baker Hostetler, shared his concerns about the possibility of cyber-terrorists shutting down critical infrastructure such as power plants. With regard to ediscovery, law firms often have digital links to client file systems which, if compromised, could leak sensitive data or intellectual property, or give hackers access to customer records.

In light of the recent high-profile cyber breaches suffered by Target Stores and others, Kobus and other panelists emphasized the need to develop an “Incident Response Plan” that includes stakeholders from across the enterprise beyond just IT, including legal, compliance, HR, operations, brand management, marketing and sales.

Kobus emphasized that management needs to “embrace a culture of securing data” as a key component of an enterprise’s Big Data strategy. As the slide below indicates, a risk-based approach to managing corporate data addresses simple but critical questions.

UnitedLex LTNY 2014 Cyber Risk Panel Slide

Many organizations have created a Chief Data or Information Governance Officer position responsible for determining how various stakeholders throughout the enterprise are using data, and cyber insurance is becoming much more popular. Big Data management, compliance and data security are intrinsically connected. Moreover, the development of a data plan is critical to the survival of many corporations, and its importance must not be overlooked or diminished.

Big Data Innovators to Watch in 2014 – and Beyond 

UnitedLex:  It is relatively easy to make the argument that it takes more than technology to innovate. UnitedLex has grown rapidly on the merits of its “Complete Litigation Lifecycle Management” solution, which provides a “unified litigation solution that is consultative in nature, provides a legally defensible, high quality process and can drive low cost through technology and global delivery centers.”

UnitedLex Domain Experts leverage best-of-breed ediscovery tools such as kCura’s Relativity for document review, as well as their own Questio, a consultant-led, technology-enabled service that “combines targeted automation and data analysis expertise to intelligently reduce data and significantly reduce cost and risk while dramatically improving the timeliness and efficiency of data analysis and document review.”

UnitedLex has reason to believe its Questio service is “materially changing the way eDiscovery is practiced” because UnitedLex SMEs help to significantly reduce risk and, on average, reduce customers’ total project cost (TPC) by 40 to 50 percent or more. This “change” is achieved primarily by reducing data volumes, avoiding legal sanctions, hosting data securely, adhering strictly to jurisdictional and ethics requirements and, most of all, by developing a true litigation and risk partnership with its customers.

How Questio Reduces Risk

Questio collection chart

61% of eDiscovery-related sanctions are caused by a failure to identify and preserve potentially responsive data. (UnitedLex)

Vound: In the words of CTO Peter Mercer, Vound provides a “Forensic Search tool that allows corporations to focus on the 99 percent of cases not going to court.” Targeting corporate counsel and internal audit, Vound’s Intella is “a powerful process, search and analysis tool that enables customers to easily find critical data. All products feature our unique ‘cluster map’ to visualize relevant relationships and drill down to the most pertinent evidence.” Intella can be installed on a laptop or in the cloud.

Intella works almost exclusively with unstructured and semi-structured data such as documents, emails and metadata, dividing the data into “facets” using a predefined, in-line multi-classification scoring scheme. The solution is used by forensic and crime analysts as well as ediscovery experts to “put the big picture together” and bridge the gap between Big Data (too much data) and ediscovery (reducing the size of relevant data sets): managing risk by identifying fraud patterns, improving efficiency by enabling early assessments, and lowering cost.

Intella slide

 

Catalyst:  Insight is a “revolutionary new ediscovery platform from Catalyst, a pioneer in secure, cloud-based discovery. Engineered from the ground up for the demanding requirements of even the biggest legal matters, Insight is the first to harness the power of an XML engine—combining metadata, tags and text in a unified, searchable data store.”

According to Founder and CEO John Tredennick, Catalyst has deployed at least three NoSQL databases on the backend to offer Insight users “unprecedented speed, visual analytics and ‘no limits’ scalability, all delivered securely from the cloud in an elegant and simple-to-use interface—without the costs, complications or compromises of an e-discovery appliance.”

In addition, Catalyst has engineered ACID transactions, dynamic faceted search, specialized caching techniques and relational-like join capabilities into Insight in order to deliver a reliable, fast and easy-to-use solution that produces results, from raw data to review, “in minutes not days” – even with document sets exceeding 10 million records.

Insight dashboard overview

 

Conclusion

The above-mentioned trio of services/solution providers represents a sea change in the way corporations big and small will strategically approach ediscovery in the era of Big Data and Cloud computing. Deploying tactical solutions that meet only short-term goals leads to higher costs and increased risks.

Services and technology solutions that can be leveraged across the enterprise by a variety of data stakeholders are a much more logical and cost-effective approach for today’s business climate than point solutions that meet only departmental needs.

A risk-based approach to managing Big Data assets throughout the enterprise – including customer and partner facing data – is not only a good idea, an organization’s survival may depend on it.


Making the Case for Affordable, Integrated Healthcare Data Repositories and PHRs

Tackling the rising cost and complexity of healthcare delivery in the U.S. and, increasingly, around the world while improving health outcomes is one of the great challenges of modern civilization. This is the primary mission of the Affordable Care Act (ACA), better known as ObamaCare, which is funded by ARRA, the American Recovery and Reinvestment Act of 2009.

Far from being an exact science, the practice of medicine is highly specialized and compartmentalized – unlike human beings who are an amalgam of interconnected and related physical systems, emotions and thoughts. Today, most of us humans find ourselves at the nexus of the healthcare delivery and management debate, and healthcare data is an integral part of the discussion.

Navigating today’s complicated healthcare ecosystem and the nuances of ACA demands that individuals take more responsibility for managing their own and their family’s healthcare services. This includes selecting a variety of healthcare professional partners who will help guide us through our health and wellness journey so we can receive the best possible health care advice and services. Healthcare data is undeniably one of those partners.

ACA promises to increase access for individuals to higher quality information regarding the efficacy of providers, procedures, medical research, case studies, outcomes data and comparative cost data previously reserved for “experts” only.

Now, due to changes in laws governing the ownership and access to healthcare records and thanks to advances in electronic data collection and analytics, laymen have the right, and the means, to review and manage their own and their family’s personal healthcare records (PHR) and view aggregated or “cleansed” healthcare data that may support better care and help improve outcomes.

Unfortunately, much of our collective potentially useful healthcare data is still locked away in paper records or inaccessible data formats within provider archives and siloed computer systems despite the fact that technology to access or “crack” these formats has been commercially available for more than a decade.

In addition, the vast majority of hospitals, physician groups and other providers have been slow to adopt these solutions or they have invested in older technology that makes the data extraction problematic or prohibitively expensive. Privacy and security concerns are also cited by those who hold or “curate” our personal health data as justification for delays in promoting potentially useful data to individuals and researchers.

At the same time, personal healthcare records advocates such as Patient Privacy Rights.Org decry the loss of individual anonymity as PHRs are legally, and illegally, resold to “thousands” of healthcare analytics companies for purposes far beyond improving health outcomes.

In addition, studies of electronic health records (EHR) solutions, including a withering article in the New England Journal of Medicine entitled Escaping the EHR Trap, suggest that EHR solutions are overpriced and inefficient, and that EHR vendors are selfishly fostering stagnation in healthcare IT innovation.

Meanwhile, ACA is allocating roughly $19 billion for hospitals to modernize their medical records systems, encouraging the adoption of technologies that are often 20 or more years out of date compared with technology adoption curves in several other industries, including finance, ecommerce, manufacturing and even government agencies.

Whether or not individuals and providers are fully aware of the ramifications of the healthcare Big Data explosion, the industry is crying out for help in resolving the critical issues at the center of the controversy: tackling security, fraud and transparency concerns; settling data portability and ownership; using healthcare data exclusively to improve outcomes; and applying the brakes to escalating costs.

The Changing Healthcare Landscape

Dr. Toby Cosgrove, CEO of the Cleveland Clinic, recently remarked at the 2014 World Economic Forum in Davos, Switzerland, “Now, healthcare is more of a team sport than an individual sport. Between doctors, nurses, technicians, IT people and others, these days it takes a whole team of people working together across specialties.” (Huff Post Live at Davos)

At the center of the team is the individual or the patient advocate – most often a family member such as a parent, child or sibling – who is responsible for orchestrating a legion of healthcare providers and technicians that might also include nutritionists, physical therapists, wellness advisors, and alternative medicine practitioners such as herbalists or chiropractors.

Also weighing in from Davos, Mark Bertolini, CEO of Aetna, the third largest health insurer in the US, says, “Healthcare costs are out of control. We really need to look at how health care is delivered and how we pay for it. Today, we pay for each piece of work done and so we get a lot of pieces of work done.” Bertolini points out that Americans are gaining more control and more responsibility for their medical bills with individuals paying about 40% of costs through premiums, deductibles and other charges.

Today, healthcare providers are relying more than ever on data derived from multiple sources to supplement traditional modes of diagnoses and care. Much of this data, useful to providers, payers and individuals, is stored on paper records but also increasingly in electronic form in a variety of “data silos” across the healthcare continuum.

The integration of these data silos to gain a holistic view of each individual’s historical healthcare record while, in the process, also achieving an aggregated view of health populations holds great promise for contributing to improved healthcare outcomes and overall lower costs. This integration and secure portability of health records is one of the primary challenges for the ACA.

The aforementioned $19 billion is being allocated to the Medicare and Medicaid electronic health records (EHR) Incentive Programs to encourage eligible providers (EPs) to update their computer systems and demonstrate “meaningful use” (MU) of healthcare technology that meets a variety of “core objectives,” including keeping up-to-date patient medication and allergy histories, consolidating personal health records and demonstrating the ability to securely transmit EHRs to patients, other providers and health information exchanges (HIEs).

The MU program is administered by the Office of the National Coordinator for Health Information Technology (ONC). Most of the money is earmarked for EPs such as hospitals, physician groups and HIEs.

One of the 17 Stage 2 core objectives of the MU EHR incentive program for 2014-15 is to “Provide patients with an electronic copy of their health information (including diagnostic test results, a problem list, medication lists, and medication allergies) upon request.”

On the face of it, the objective of Measure 12 is simple. In practice, very few hospitals today can comply with the letter of the law; they are therefore in jeopardy of not meeting HIPAA requirements and of losing future meaningful use incentive dollars, which are allocated in stages over several years.

Here is a link to an ONC document that outlines all of the Core Objectives for Stage 2 MU.

What is the Law?

HIPAA laws have been strengthened over the last decade to enforce the rights of individuals and strongly encourage HIPAA compliance from providers, payers, employers and other covered entities where HIPAA compliance is required. The following link, HIPAA PHR and Privacy Rules, outlines an individual’s right to access their electronic records.

Later this year, amendments to HIPAA privacy rules will go into effect that provide individuals greater ability to access lab reports, further “empowering them to take a more active role in managing their health and health care,” according to the rule.

HIPAA rules have also been amended to provide individuals and government agencies with recourse if HIPAA security is breached or if personal health records are not made available in a reasonable timeframe, usually within several business days. A California-based privacy group details the types of PHRs, the laws that protect individual rights and examples of fines recently levied on health providers that do not comply with HIPAA regulations. One health insurer incurred a $1.5 million fine, while a cardiac surgery group was fined $100,000 for not properly implementing HIPAA safeguards.

On the flip side of the argument, Dr. Deborah C. Peel, Founder and Chair of Patient Privacy Rights.org, believes HIPAA rules have actually been weakened. In recent “testimony” addressed to Jacob Reider, MD, National Coordinator for Health Information Technology at the ONC, Dr. Peel articulates her concerns about the widespread practice by analytics companies, payers and providers of Patient Matching – a technique used to exchange U.S. health data without patient involvement or consent.

In part, Dr. Peel asks, “How can institutions exchange sensitive health data without patient participation or knowledge?” Apparently relatively easily.

Healthcare Data Map

A “live” Data Map of the above graphic was developed in cooperation with Harvard University.

At present, billions of MU dollars are still available for EPs to support the adoption of EHR and electronic medical records (EMR) solutions. However, by 2016, the Centers for Medicare & Medicaid Services (CMS) will withhold money from providers who do not comply with MU requirements. CMS has already stopped paying for what it characterizes as avoidable readmissions for congestive heart failure (CHF), acute myocardial infarction (AMI) and pneumonia (PN).

CMS is also finalizing the expansion of the applicable conditions for 2015 to include acute exacerbation of chronic obstructive pulmonary disease (COPD) and patients admitted for elective total hip arthroplasty (THA) or total knee arthroplasty (TKA).

Value of EHRs and PHRs

As indicated in this health records infographic (also seen below), EHRs and personal health records (PHRs) are becoming more valuable to providers and individuals, with more healthcare data available online and technology advances that leverage the data in multiple ways.

ONC consumer health records infographic

More than 10% of smartphone users have downloaded an app to help them track or manage their healthcare services, and 2 out of 3 people said they would consider switching to providers who offer access to their health records through the Internet. (Even more reason to place control of data in the hands of those who will use it to benefit the individual.)

Better access to health information has many benefits, including less paperwork and easier access to records; better coordination of care across providers; faster, more accurate filling of prescriptions; and fewer unnecessary, duplicative tests that inflate costs or involve risk.

Ultimately, EHRs and PHRs offer the individual better control over their healthcare experience and give caregivers additional information to improve their quality of service.

PHR services such as Microsoft’s HealthVault – along with 50-plus other non-profit and for-profit PHR-related services, including Medicare’s own Blue Button PHR service – offer users a way to collect, update, store and selectively transmit medical records to providers or anywhere else they choose. Medicare (CMS) has opened its Blue Button format to encourage “data holders” and software developers to adopt its Blue Button Plus (BB+) framework, which offers a “human-readable format and machine-readable format.”

Despite security concerns and the potential loss of anonymity, millions of Americans have reasoned that capturing and sharing their personal health data has some value. Analytics firm IMS Health evaluated over 43,000 health-related apps available for Apple smartphones alone. While IMS concluded that only a small number of apps were “useful” or engaging, the explosion in mobile health apps is just one indication of consumer interest in using health data to modify lifestyles in order to improve health.

While large health insurers such as Aetna and United Health Group have, on the surface, bought into the BB+ initiative, early indications are that usage is less than 1% of a potential pool of roughly 100 million U.S. citizens whose data is held by health insurance companies, health information exchanges (HIEs) and the Veterans Administration.

One could argue the BB+ program is new and not yet well publicized or understood. For those of us who have actually signed up for PHR services and tried to use them, the bigger problem is likely poor user interfaces and an overall lackluster customer experience leading to little motivation for engagement.

Barriers to EHR and PHR Adoption

There are several factors slowing the widespread adoption of electronic healthcare records by providers and individuals including politics, education, cost, transparency, flawed technology and workflow, and lack of engagement and innovation.

Politics

Until recently, there was no clear statement from lawmakers on ownership of personal health records or any teeth to enforce HIPAA security standards. Providers, payers, pharmaceutical companies, analytics firms as well as government agencies who possessed EHR data “owned” the data. Even as the government has declared individuals own their data, the value of EHRs to the entire healthcare ecosystem has increased exponentially.

The more providers move to EHR solutions, the more coveted that data becomes for supporting research, drug trials, population health, fraud detection, supply chain management, clinical informatics start-ups, venture capitalists and other lucrative data analytics businesses.

Education

Those members of the healthcare ecosystem who are benefiting financially from reselling de-identified, aggregated or enhanced EHR data usually prefer not to publicize their windfall. Publicly, the ecosystem prefers to focus on the value to individuals – of which there is potentially much benefit. However, the financial benefit does not easily trickle down to individuals or even to most caregivers.

With the individual in control of their own data, the potential for dramatically improving the accuracy and efficacy of individual and aggregated data is enormous. There should be some financial benefit to the individual in the form of lower healthcare costs, reduced prescription drug costs and healthcare insurance rebates for accurately gathering and managing PHR data.

Cost

Most hospitals are spending millions of dollars migrating older EHR or electronic medical records (EMR) systems to MU-“certified” solutions. However, MU incentives pay only a fraction of the cost of these systems. For example, a single practitioner or EP will get less than $50,000 for complying with MU, which is likely less than the initial cost of an MU-certified solution for the first year or so.

In comparison, Kaiser Permanente, one of the country’s largest providers, has 17,000 doctors. With Kaiser receiving $50,000 for each EP, the total MU incentive reimbursement would be a whopping $850 million. Unfortunately, Kaiser has purportedly spent over $3 billion on their EHR upgrade – and counting.

Up to this point, MU incentives have not required providers to link or integrate all of their internal IT and hospital systems. Given that most providers are falling behind on existing MU incentive objectives, total integration is not yet on most providers’ to-do lists. Kaiser and other leading-edge health systems such as Intermountain and the Mayo Clinic have largely completed their integration, but at a very high cost.

Transparency

Despite MU requirements for more transparency to meet quality objectives for reimbursement, some providers may be reluctant to disclose all patient information to individuals, including detailed, consolidated procedure and billing information. This level of detail may expose overuse of certain procedures or a pattern of overcharging for services.

Individuals may also not want certain procedures (liposuction or AIDS testing) or lifestyle choices (drinking in excess or drug abuse) recorded in their charts for posterity. Security in general is an issue for most providers, as their systems and IT managers may not have access to the most up-to-date security solutions. Most providers are far from leading edge when it comes to data security.

Flawed Healthcare Technology and Workflow

As the saying goes, “In the land of the blind, the one-eyed man is king.” The scramble to adopt certified EHR solutions to qualify for MU incentives has an unfortunate consequence: the bulk of EHR solutions have severe limitations, starting with a lack of interoperability.

For instance, a patient may be admitted to the emergency room, sent to intensive care, taken for x-rays, undergo surgery and then follow up a week later as an outpatient in the doctor’s office. In most cases, the EHRs along the way are supplied and supported by different software vendors using different record formats. If the patient has a nutritional component to their care or requires rehab, those visits might also be recorded in different systems.

Doctors complain that using EMR solutions has turned them into data entry clerks. Beyond diagnostic and billing codes, there is often no standard nomenclature for some diseases or ailments. Meanwhile, EHR vendors claim their solutions are capable of being the “System of Record” or the central repository for all of a patient’s consolidated records.

This recent article from Medical Economics makes the case for why there is such an outcry from physicians over the poor functionality and high costs of EHRs. A study referenced in the article found that two-thirds of doctors would not purchase the same EHR again. Doctors also complained of lost efficiency, the need for additional staff just to manage the new EHRs, and negative impacts on the quality of patient care.

Experience has demonstrated that the more popular EMRs are good at billing, supporting some workflows, basic reporting and collecting data from some other systems to include in or attach to a patient’s chart. However, no EMR solution has adequately demonstrated that it can function as an integrated data repository while sustaining the high speed and volumes required for clinical decision support systems, analytics and medical informatics solutions that are fast approaching the multiple hundreds of terabytes range and require sub-second response times.

Dr. John Halamka, who writes the Geek Doctor Blog and is the CIO of one of the few remaining hospitals in Eastern Massachusetts that is not migrating its EHR/EMR to Epic (the previously mentioned “one-eyed man”), compared his plight to the final scene in the movie Invasion of the Body Snatchers: “At times, in the era of Epic, I feel that screams to join the Epic bandwagon are directed at me.”

Halamka adds, “The next few years will be interesting to watch. Will a competitor to Epic emerge with agile, cloud hosted, thin client features such as Athenahealth?  Will Epic’s total cost of ownership become an issue for struggling hospitals?  Will the fact that Epic uses Visual Basic and has been slow to adopt mobile and web-based approaches prove to be a liability?”

On its “about” page, Epic touts its “One Database” approach: “All Epic software was developed in-house and shares a single patient-centric database.” That database technology, referred to by its unfortunate acronym, MUMPS, was developed in the 1960s at Mass General. MUMPS and Epic have many critics who bristle at the idea of using “patient-centric” in the same sentence.

Blogs such as Power Your Practice contend that Epic and MUMPS are stifling innovation. “MUMPS and Interoperability: A number of industry professionals believe MUMPS will be weeded out as doctors and hospitals continue to implement electronic health records, namely because MUMPS-based systems don’t play nice with EHRs written in other languages. There is a reason why the Silicon Valley folks aren’t too fond of the language.

If MUMPS truncates communication between systems, then it hinders interoperability, a cornerstone of EHR adoption. One of the goals of health IT is to avoid insularity, so unless your practice or hospital’s goal is to adopt a client-server enterprise system with limited scalability – and you don’t care much for interoperability – MUMPS may be an option for you.”

Epic and MUMPS have their proponents – primarily the buyers and users at almost 300 hospitals that have collectively forked over many billions of dollars to help make Epic a healthcare EHR/EMR juggernaut. This Google+ thread started by Brian Ahier is replete with heated exchanges about MUMPS and Epic’s lack of interoperability.

Lack of Engagement and Innovation

Standards such as HL7 are only partly working, as they still do not handle unstructured data very well, and the drive by EMR vendors, some physician organizations and HIEs to structure all EHRs is unrealistic. Worse yet, the idea that each individual’s health narrative can be reduced to a collection of stock answers and check boxes is simply not based in reality.

Managing your health records should not be like pulling teeth. Yet the PHR “experience” is tedious, lengthy and boring, akin to filling out an application for health insurance or completing a medical history at the doctor’s office. It is heavy data entry with little interaction from the app itself.

In addition, most PHR solutions offer only static results, with no intelligent mapping across populations and no help gathering an individual’s “publicly available” or private records. Even CMS’ Blue Button has provider and billing codes that do not translate well for non-healthcare professionals, and Blue Button plans to keep records for only 3 years – not nearly long enough to track certain individual or population health trends and services.

As pointed out in the NEJM article referenced above, “Health IT vendors should adapt modern technologies wherever possible. Clinicians choosing products in order to participate in the Medicare and Medicaid EHR Incentive Programs should not be held hostage to EHRs that reduce their efficiency and strangle innovation.”

Time for Affordable Healthcare Big Data Management

Affordable healthcare and improved quality are the primary goals of the ACA. Affordable healthcare data management should also be a top priority, as providers and individuals need access to timely information.

Already, smaller providers and hospitals are pressed for funds to meet new ACA requirements, and existing solution providers seem intent on exacerbating the problem with overly expensive, poorly functioning solutions and services.

Individuals also need access to affordable or, one could argue, free data to support their own healthcare journey and the healthcare services needs of their extended families.

Healthcare solutions and services vendors as a whole – compared with solution providers in other industries – seem less engaged with newer technology advancements that could help drive dramatic cost reductions in IT services and solutions adoption or, for that matter, vastly improve performance. It appears healthcare solutions buyers are too willing to settle for less.

Too many healthcare data management solutions that claim to tackle big data are doing so with old technology. While the financial industry has widely adopted standards such as XML, SWIFT for inter-bank financial transactions and Check 21 for imaging – not to mention embraced the Linux and open source community – healthcare still struggles with HL7 standards begun over 25 years ago.

Yes, XML is used in healthcare to allow some basic document-transfer interoperability between EHR/EMR systems. However, vendors such as Epic use proprietary extensions that make transfers more difficult outside their own customer base. And yes, some older technologies still work well; COBOL programs are still integral to many mainframe systems. Nevertheless, most of the newer, innovative web-scale systems were developed using open source tools created or refined in the last decade or so.

Web retailers such as Amazon have bypassed traditional relational database technologies for open source NoSQL databases that are more scalable, available and affordable. Airline reservation systems have run into relational database bottlenecks and are deploying real-time, in-memory databases. Traditional brick-and-mortar businesses such as banks and retailers have embraced cloud computing and security standards such as Kerberos, SAML and OpenID.

According to Shahid Shah, The Healthcare IT Guy, healthcare IT needs to consider industry-neutral protocols and semantic data formats like RDF and RDFa. “Pulling data is easier. Semantic markup and tagging is easier than trying to deal with data trapped in legacy systems not built to share data.”
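As a rough illustration of Shah’s point, semantic formats let a record carry its meaning as simple subject-predicate-object statements that any system can read without knowing the source application’s schema. The sketch below uses the open source rdflib library with a made-up vocabulary; the example.org namespace, identifiers and property names are placeholders, and a real deployment would use an agreed healthcare ontology.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Made-up vocabulary for illustration only.
EX = Namespace("http://example.org/health/")

g = Graph()
patient = EX["patient/12345"]                  # hypothetical identifier
reading = EX["observation/bp-2014-02-01"]      # hypothetical blood-pressure observation

g.add((reading, RDF.type, EX.BloodPressureReading))
g.add((reading, EX.subject, patient))
g.add((reading, EX.systolic, Literal(128)))
g.add((reading, EX.diastolic, Literal(82)))

# Turtle output is both human-readable and machine-readable.
print(g.serialize(format="turtle"))
```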

Shah is not alone in his assessment regarding the cost of maintaining legacy data management systems. Many studies suggest older technology costs users more money in the long run.  Here are a few examples:  Big Data and the problem with Relational Databases, Understanding Technology Costs, and Time to Pull the Plug on Relational Databases?

In addition, there is at least one non-profit organization that views healthcare technology interoperability as a priority. The West Health Institute believes improving healthcare device interoperability alone can save the industry $30 billion per year.

Excerpted from the “pull the plug” link above, “Former Federal CIO Vivek Kundra recently said, ‘This notion of thinking about data in a structured, relational database is dead. Some of the most valuable information is going to live in video, blogs, and audio, and it is going to be unstructured inherently.’ Modern, 21st century tools have evolved to tackle unstructured information, yet a huge majority of federal organizations continue to try and use relational databases to solve modern information challenges.”

The same can be said of the healthcare industry. Despite industry efforts to structure as much healthcare data as possible, the bulk of healthcare data will remain unstructured and the narrative of each individual’s healthcare record will be the richer for it.

The Way Forward for Healthcare Big Data Integration

The development of affordable healthcare solutions needs to be focused on supporting the two primary partners in the healthcare ecosystem: Individuals and providers.

Clearly, tackling the problem of integrating disparate data types gathered from multiple sources and organizing that data into a cogent, human readable format is a Big Data challenge that demands 21st century Big Data handling solutions.

It should be understood that EHRs were not designed to manage a variety of healthcare data formats. In addition, massive RDBMS multi-year data warehouse projects utilizing limited, structured data sets were fine for retrospective or financial reporting but do not work well with large volumes of unstructured data needed for real-time and predictive analytics.

The EHR and PHR are intrinsically interconnected and inseparable. Making it easier for providers and individuals to develop rich healthcare narratives and securely share information should be a top priority for the industry.

As in the early days of the open source movement, when most solution providers were slow to envision how to monetize free software, today’s EHR integration solutions are expensive and PHR solutions are unwieldy and lack imagination.

The open source model is built around a common community goal. The common goal for EHRs and PHRs is to lower costs and improve healthcare outcomes. Improving the quality of healthcare data by encouraging the primary sources of that data to collaborate greatly benefits both parties – partners in healthcare data management and ownership.

HDR and PHR Sample Scenarios

The following scenarios are examples of how providers and individuals can play a major role in gathering and enhancing healthcare data for the mutual benefit of both parties. Implementing available, tested, secure, scalable and affordable 21st century technology also plays a key role.

Integrated Healthcare Data Repository for Hospitals

The Challenge

Changes in the economics of healthcare delivery brought on by new laws enacted through the ACA/ObamaCare rollout and the advent of Big Data technologies are forcing hospitals to rethink their funding, workflow, supply chain management, services, staffing and IT strategies.

The result is most hospitals are struggling to implement newer technologies that can help lower costs, improve patient outcomes and buoy employee job satisfaction.

Larger non-profit and for-profit hospital groups have an advantage over stand-alone or smaller hospitals and physicians’ groups due to economies of scale and efficiency, including increased purchasing power and negotiation leverage; standardization across hospital IT systems; flexibility of a larger workforce; broader service offerings; and more revenue to justify healthcare IT upgrades, capital expenditures and recruitment or retention of key staff.

Also critical is the ability to access larger pools of data to help meet ACA performance metrics and CMS reimbursement requirements and to determine the best treatment options for individuals and larger health population groups. Smaller provider groups and hospitals can access larger pools of data offered by newly formed analytics businesses owned by large providers or payers – for a price – including Optum Health and Verisk Health.

Data “curated” by providers and hospitals is virtually owned by the application vendors. To paraphrase Shah, “Never build your data integration strategy with the EHR in the center. Create it with the EHR as a first-class citizen. Focus on the real customer: the patient.”

The Solution: An Integrated HDR

Using mostly open source components including a NoSQL database, analytics and cloud orchestration software, and also leveraging existing underutilized network, hardware and storage assets, even a smaller hospital group can affordably create an integrated healthcare data repository (HDR) to help meet compliance, regulatory, internal reporting and clinical information requirements – and much more.

Integrated Healthcare Data Repository HDR

Several established and emerging vendors are eager to conduct proofs of concept (POCs) and partner with motivated hospitals that would prefer to stay independent and are struggling to keep expenses under control. Even larger hospital groups struggle with integration issues.

Here is a link to a report on 21 NoSQL Innovators, many of which have a footprint in healthcare and offer open source or otherwise affordable, proven alternatives to traditional, expensive relational databases. Examples include NoSQL segment market share leader MarkLogic 7, open source DB community leader MongoDB and Virtue-Desk’s “associative” database AtomicDB.

HDR Benefits

Savings and Cost Avoidance

  • Spend thousands not millions on integrating healthcare data
    • Liberate your data from HIT solutions silos, make data analytics-ready
    • Lower legacy IT infrastructure costs (data, storage and compute)
  • Avoid expensive software licenses, upgrades and maintenance costs
    • Offload data from expensive data warehousing systems
    • Improve efficiency of existing software systems
  • Help meet ACA and MU compliance requirements
    • Integrate EHR data and deliver HIPAA compliant PHRs to patients
    • Support clinical decision support systems and analyze outcomes data
    • Enable predictive analytics to avoid CMS penalties
    • Avoid HIPAA penalties by avoiding unauthorized information releases
    • Maintain patient privacy of sensitive data, e.g. psychiatric progress notes
  • Avoid time consuming manual workflows
    • Automatically access data for operations and supply chain management
    • Support clinical, research efforts with holistic data views, visualization
  • Recruit and retain high quality staff by implementing state of the art technology

Revenue Generation

  • Analyze data to determine best treatment modalities for patient population
    • Run analytics against holistic data sets to improve treatment choices
    • Discover and predict needs of local patient populations
    • Focus on higher value services within the community
  • Partner with other providers, payers, pharma on aggregated, anonymized data
    • Data to supplement clinical trials
    • Data to supplement payer wellness programs
    • Data to supplement academic medical center research
  • Recruit local clinics, providers, employers, HIEs with state of the art solution
    • Share/defer system costs with local health provider participation
    • Embrace clinically useful information such as cancer genomics
  • Develop a true data partnership with patients and their families
    • Encourage individuals to return to your facility when seeking care
    • Support patient-centric wellness and care programs to local employers

Personal Healthcare Records

The Challenge

The U.S. and most of the rest of the world is in the midst of an elemental transformation in the way healthcare services are delivered to individuals and their families. The cost of healthcare is rising in real terms along with the percentage of the total cost of healthcare for which individuals in the U.S. are responsible – due to higher insurance premium deductibles and vital health services and treatments falling outside of insurance plans’ basic coverage.

Now that we have established the inevitability of personal healthcare information being created and used, for better or worse, as well as resold in a variety of legal and illegal forms, there are three fundamental questions that need to be answered:

1. What role do individuals play in helping to reform healthcare delivery?

2. How do individuals move from passive to active participants in their care?

3. How can individuals leverage or monetize their personal health information?

Answers to the questions above include:

  • Request or demand more information about your care, including treatment options and costs prior to agreeing to procedures, as well as electronic copies of all your records. Store your PHR in a central location, e.g., a spreadsheet, email with attached images, or a PHR service.
  • Individuals need to assume an active role in managing their own and their family’s care. In an age of hyper-specialization, individuals should not be surprised if they develop knowledge of an illness or treatment more comprehensive than many caregivers possess. Look for ways to share that knowledge with others who can benefit from it, e.g., through online forums or information exchanges.
  • Seek out new web-based services that exchange healthcare information for individuals. There are more than 50 PHR services available today, ranging in price from free to “concierge” level services that charge a monthly fee. Business models that allow individuals to monetize their own PHR are still early in the game but not at all far-fetched. Business models and services are changing rapidly. Expect changes and new services to appear – services that are engaging and have a quantifiable value proposition.

In addition, a few assumptions need to be made in order to effect change:

1. The individual (or a family member or duly appointed health advocate) needs to assume responsibility for their healthcare delivery and wellness strategy.

2. More accurate, richer personal healthcare records, along with access to anonymized, aggregated healthcare information and data, can increase an individual’s chances for improved health outcomes at a lower cost.

3. Data has value. Some say data is the oil of the 21st century. Payers, pharma, healthcare analytics firms and other for-profit consumers of healthcare records data need to compensate individuals directly for their anonymized data: pay for it through a PHR broker, offer an incentive through lower premiums or lower-cost prescription drugs, or provide individual access to outcomes data and wellness information.

Individuals also need to understand that much of the data being collected today is incomplete or inaccurate and is now “owned” by companies that have their corporate customers’, not the individual’s, best interests in mind.

Imagine following a recipe that has key ingredients missing.

Imagine all of the incorrect data being used by researchers to make multi-billion dollar decisions on treatment modalities and future clinical trial investments.

According to another blog post by Shah (The Healthcare IT Guy) entitled Causes of digital patient privacy loss in EHRs and other Health IT systems, “Business models that favor privacy loss tend to be more profitable. Data aggregation and homogenization, resale, secondary use, and related business models tend to be quite profitable. The only way they will remain profitable is to have easy and unfettered (low friction) ways of sharing and aggregating data. Because enhanced privacy through opt-in processes, disclosures, and notifications would end up reducing data sharing and potentially reducing revenues and profit, we see that privacy loss is going to happen with the inevitable rise of EHRs.”

During a podcast conducted last month by KCRW entitled Big Data for Healthcare: What about patient privacy?, Dr. Deborah Peel noted, “Powerful data mining companies are collecting intimate data on us. We want the choice of sharing data only with those we know and trust to collect our data. Patients have more interest and stake in data integrity and patient safety than any other stakeholders.”

Dr. Peel calls this “Partnership with consent”. Unencumbered by the “expensive legal and contractual processes and burdens of institutions, and without the need for expensive, complex technologies and processes to verify identity, patients can move PHI easily, more cheaply, and faster than institutions. The lack of ability to conveniently and efficiently update demographic data is one of the top complaints the public has about Healthcare IT systems.

Health technology systems violate our federal rights to see who used our data and why. Despite the federal right to an Accounting of Disclosures (AODs) – the lists of who accessed our health data and why – technology systems violate this right to accountability and transparency.”

No doubt, Dr. Peel and Shah are correct. Yet, as healthcare data collection, management and analysis tools mature and become more mainstream, it is clear that buried within the ever-increasing tower of electronic rubble that is Big Data are insights waiting to be liberated.

It is becoming increasingly rare to find a clinician who would not agree that access to machine-derived data and information has had, and will continue to have, a positive impact on a variety of health outcomes.

To paraphrase Iya Khalil, co-founder of healthcare Big Data analytics company GNS Healthcare and also a panelist on the above-mentioned KCRW podcast: “What-if scenarios for the future are more predictive and will rely more on precision medicine, not on an ‘average’ patient. Big Data gives this insight. Painting a more accurate picture of an individual’s health through more accurate data enables patients and doctors.”

The descriptions of illnesses change over time, along with the way individuals talk about their lifestyle or detail their personal narrative. Rather than structuring as much data as possible, as present-day EHR solutions and billing systems would have us do, the critical need is to capture each individual’s personal health narrative in his or her own “voice.”

Moreover, individuals are more likely than institutions or machines to recognize errors, mismatches and omissions, some of them potentially harmful. Individuals can and would be willing to verify information if engaged and properly incentivized.

The Solution: a True PHR System of Engagement

As mentioned in the Integrated Healthcare Data Repository (HDR) section above, there are at least 21 NoSQL solution innovators that offer proven, scalable, secure, affordable solutions to augment or replace expensive, less flexible traditional databases that rely primarily on Structured Query Language (SQL) to extract information.

Example: PHR and Rare Illness Data Exchange Repository (RIDER)

PHR and RIDER Data Flows

The PHR/RIDER is a concept that combines at least two existing business models: a place for individuals to store and compare their PHRs and a secure database for exchanging multiple levels of information with other individuals, providers or other entities. The solution affords the individual protection, choice and management tools at no cost other than their time.

The specific focus on rare illnesses addresses a need, expressed by the NIH and others, for an increased effort to collect data from smaller health populations both in the U.S. and abroad, where increasing and combining data sets may support additional insights and healthcare breakthroughs.

In part, the NIH states, “Rare diseases comprise a clinically heterogeneous group of approximately 6,500 disorders each occurring in fewer than 200,000 persons in the USA. They are commonly diagnosed during childhood, frequently genetic in origin, and can have deleterious effects on long-term health and well-being. Although any given condition is rare, their cumulative public health burden is significant with an estimated 6-8% of individuals experiencing a rare disease at some point during their lives.”

Fewer than 20% of rare diseases have registries – an indication that most pharmaceutical and biotech firms see little financial return in pursuing cures for rare or so called “orphaned” diseases.

The intent of RIDER is to manage multiple data sets of rare illnesses, letting the machine do the heavy work of data abstraction and analytics to determine whether any significant correlations exist between the data sets, and then making aggregate data available to the individual contributors as well as carefully selected healthcare providers or other members of the healthcare community.
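
As a rough, hypothetical illustration of that correlation screening, the Python sketch below uses pandas to compute pairwise correlations across a few invented, anonymized RIDER-style variables; the column names and values are sample data, not real patient records, and a production system would run against far larger de-identified data sets.

```python
# Illustrative sketch: correlation screening across anonymized rare-illness data.
# Column names and values are hypothetical sample data, not real patient records.
import pandas as pd

records = pd.DataFrame({
    "symptom_severity":    [2, 5, 4, 7, 6, 3, 8, 5],
    "medication_dose_mg":  [10, 40, 30, 60, 55, 20, 70, 35],
    "sleep_hours":         [8, 5, 6, 4, 5, 7, 4, 6],
    "weekly_exercise_hrs": [4, 1, 2, 0, 1, 3, 0, 2],
})

# Pairwise correlation matrix; Spearman is less sensitive to outliers than
# Pearson, which matters for small rare-disease cohorts.
corr = records.corr(method="spearman")

# Keep only the stronger relationships (excluding self-correlation) for review.
strong = corr[(corr.abs() > 0.7) & (corr.abs() < 1.0)]

print(corr.round(2))
print(strong.dropna(how="all").round(2))
```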

Using mostly open source components, including a NoSQL database, analytics and cloud computing infrastructure, small or large non-profit and for-profit groups can more easily afford to implement a true, patient-focused portal whose primary mission is lowering the individual’s costs and improving health outcomes.

PHR/RIDER Benefits

Savings and Cost Avoidance

  • Spend thousands not millions on integrating healthcare data
    • Provide individuals with free access to PHI and health information
    • Lower clinical trial costs with supplemental data
    • Support and accelerate research efforts
    • Avoid time-consuming manual workflows
  • Lower cost of care with supporting data
    • More quickly determine top treatment options
    • Eliminate need to gather PHI from multiple sources
    • More easily compare provider costs and efficacy

Improve Outcomes

  • Support clinical, research efforts with holistic data views, visualization
    • Run analytics against holistic data sets to improve treatment choices
    • Discover and predict needs for orphaned patient populations
  • Partner with providers, payers, pharma, HIEs on aggregated, anonymized data
    • Data to supplement clinical trials
    • Data to supplement payer wellness programs
    • Data to supplement academic medical center research
    • Data to supplement approved health informatics programs
  • Develop a true data partnership with your patients and their families

Conclusion

According to survey data recently released by the analytics division of HIMSS, patient portals and clinical data warehousing/mining are two of the top three new applications poised for growth among hospitals over the next five years. “The findings presented in this report suggest there is an opportunity for vendors and consultants to assist hospital leaders in their efforts to improve care by helping them realize their full EMR capabilities. Of the three applications predicted to dominate first-time sales in hospitals, the patient portal market opportunity is the only one clearly tied to the Meaningful Use Stage 2 requirements.”

Truly patient-centric solutions will dominate new healthcare IT system sales beginning in 2014. These are solutions that support MU core objectives intended to improve patient communication, enable secure, authorized individual access to and transfer of personal health records (PHRs), and extend functionality beyond the limits of today’s EHR/EMR solutions to support dramatically enhanced clinical informatics and MU-mandated compliance capabilities.

Newer approaches that leverage open source and NoSQL database and query technologies can lower solution costs while improving functionality and ameliorating security concerns. In addition, standards such as JSON for documents and unstructured text, already widely adopted outside the healthcare industry, along with generally available, secure and maturing cloud services, need to be strongly considered.
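
To show why a widely adopted document standard matters here, the following is a minimal, hypothetical sketch of a JSON-encoded personal health record entry using Python's standard json module; the field names are invented for illustration, and real exchange would follow an agreed schema plus HIPAA-appropriate transport and access controls.

```python
# Minimal sketch of a portable, JSON-encoded personal health record entry.
# Field names are hypothetical and do not follow any formal healthcare schema.
import json

phr_entry = {
    "patient_id": "patient-001",              # pseudonymous identifier
    "recorded": "2013-11-02T09:30:00Z",
    "type": "encounter_note",
    "structured": {
        "blood_pressure": {"systolic": 128, "diastolic": 82, "unit": "mmHg"},
        "medications": ["lisinopril 10 mg"],
    },
    # Unstructured narrative kept in the patient's own "voice".
    "narrative": "Felt lightheaded after morning walks this week; "
                 "symptoms ease after breakfast.",
}

serialized = json.dumps(phr_entry, indent=2)
print(serialized)

# Any JSON-aware system (NoSQL document store, patient portal, mobile app)
# can parse the same record without a vendor-specific integration layer.
restored = json.loads(serialized)
assert restored["structured"]["blood_pressure"]["systolic"] == 128
```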

Finally, senior hospital management and providers of all sizes need to understand the inherent limitations of this generation’s crop of EHR/EMR solutions. Billing, practice management, quality reporting and meeting a number of other meaningful use requirements are extremely important objectives. However, buying into the hype that EMRs can properly function as the central repository for critical data consolidated from a dozen or more systems within the provider universe is misguided – at best.

Individuals and providers must work together to update, manage and secure healthcare data to increase the likelihood that information contained in medical records is used to improve individual wellness or aggregated to drive insights that benefit health populations. To that end, there are indeed viable, innovative and affordable solutions to pursue.

 

Posted in Big Data, Healthcare Informatics, Information Governance, Information Management Best Practices, Information Management Thought Leadership, Strategic Information Management

21 NoSQL Innovators to Look for in 2020

In the ever-evolving world of enterprise IT, choice is generally considered a good thing, although having too many choices can create confusion and uncertainty. For those application owners, database administrators and IT directors who pine for the good old days when one could count the number of enterprise-class databases (DBs) on one or two hands, the relational-database-solves-all-our-data-management-requirements days are long gone.

Thanks to the explosion of Big Data throughout every industry sector and requirements for real-time, predictive and other forms of now indispensable transactions and analytics to drive revenue and business outcomes, today there are more than 50 DBs in a variety of categories that address different aspects of the Big Data conundrum. Welcome to the new normal world of NoSQL – or Not only Structured Query Language – a term used to designate databases that differ from classic relational databases in some way.

In August, more than 20 NoSQL solution providers and 100-plus experts gathered at the San Jose Convention Center for the 2013 edition of NoSQL Now!. Exhibitors and speakers included familiar names such as Oracle along with a score of venture-backed NoSQL solution providers eager to disseminate their message and demonstrate that the time has come for enterprises of every ilk to adopt innovative database solutions to tackle Big Data challenges. More than a dozen sponsors were interviewed at the event and profiled in this research note.

Evolution of NoSQL

In the beginning, there was SQL (structured query language). Developed by IBM computer scientists in the 1970s as a special-purpose programming language, SQL was designed to manage data held within a relational database management system (RDBMS). Originally based on relational algebra and tuple relational calculus, SQL consists of a data definition language and a data manipulation language. Subsequently, SQL has become the most widely used database language largely due to the popularity of IBM, Microsoft and Oracle RDBMSs.
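
For readers who have not looked at SQL in a while, here is a minimal sketch of its data definition and data manipulation languages using Python's built-in sqlite3 module; the table and column names are invented for illustration.

```python
# Minimal sketch of SQL's data definition language (DDL) and
# data manipulation language (DML) using Python's built-in sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define a rigid, typed schema up front.
cur.execute("""
    CREATE TABLE lab_results (
        id         INTEGER PRIMARY KEY,
        patient_id TEXT NOT NULL,
        test_name  TEXT NOT NULL,
        result     REAL,
        taken_on   TEXT
    )
""")

# DML: insert and query rows that must conform to that schema.
cur.execute(
    "INSERT INTO lab_results (patient_id, test_name, result, taken_on) "
    "VALUES (?, ?, ?, ?)",
    ("patient-001", "Hemoglobin A1c", 6.8, "2013-10-15"),
)
conn.commit()

cur.execute("SELECT test_name, result FROM lab_results WHERE patient_id = ?",
            ("patient-001",))
print(cur.fetchall())   # [('Hemoglobin A1c', 6.8)]
conn.close()
```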

NoSQL DBs started to emerge and become enterprise-relevant in the wake of the open-source movement of the late 1990s. Aided by the movement toward Internet-enabled online transaction processing (OLTP), distributed processing leveraging the cloud, and the inherent limitations of relational DBs (including limited horizontal scalability, flexibility, availability and findability, as well as high cost), use of NoSQL databases has mushroomed.

Amazon’s Dynamo, the forerunner of DynamoDB, is considered by many to be the first large-scale, or web-scale, production NoSQL database. To quote author Joe Brockmeier, who now works for Red Hat, “Amazon’s Dynamo paper is the paper that launched a thousand NoSQL databases.” Brockmeier suggests that the “paper inspired, at least in part, Apache Cassandra, Voldemort, Riak and other projects.”

According to Amazon CTO Werner Vogels, who co-authored the paper entitled Dynamo: Amazon’s Highly Available Key-value Store, “DynamoDB is based on the principles of Dynamo, a progenitor of NoSQL, and brings the power of the cloud to the NoSQL database world. It offers customers high availability, reliability, and incremental scalability, with no limits on dataset size or request throughput for a given table.” DynamoDB is the primary DB behind the wildly successful Amazon Web Services business and its shopping cart service that handles over 3 million “checkouts” a day during the peak shopping season.
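
As a flavor of the key-value access pattern DynamoDB popularized, here is a hypothetical Python sketch using the boto3 library. It assumes AWS credentials are configured and that a table named "shopping_carts" already exists with "cart_id" as its partition key; the table, key and attribute names are assumptions for illustration, not details taken from the Dynamo paper.

```python
# Hypothetical sketch of key-value access against Amazon DynamoDB via boto3.
# Assumes configured AWS credentials and an existing "shopping_carts" table
# whose partition key is "cart_id" (both are illustrative assumptions).
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
carts = dynamodb.Table("shopping_carts")

# Put: store an item under its key; no fixed schema beyond the key attributes.
carts.put_item(Item={
    "cart_id": "cart-42",
    "customer": "customer-001",
    "items": ["thermometer", "bandages"],
})

# Get: retrieve by primary key; high availability and low latency are the point.
response = carts.get_item(Key={"cart_id": "cart-42"})
print(response.get("Item"))
```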

As a result of Amazon DynamoDB and other enterprise-class NoSQL database proof points, it is not uncommon for an enterprise IT organization to support multiple NoSQL DBs alongside legacy RDBMSs. Indeed, single applications often deploy two or more NoSQL solutions, e.g., pairing a document-oriented DB with a graph DB for an analytics solution. Perhaps the primary reason for the proliferation of NoSQL DBs is the realization that one database design cannot possibly meet all the requirements of most modern-day enterprises – regardless of the company size or the industry.

The CAP Theorem

In 2000, UC Berkeley researcher Eric Brewer presented his now-foundational CAP theorem (consistency, availability and partition tolerance), which states that it is impossible for a distributed computer system to simultaneously provide all three guarantees. In May 2012, Brewer clarified some of his positions on the oft-used “two out of three” concept.

  • Consistency (all nodes see the same data at the same time)
  • Availability (a guarantee that every request receives a response about whether it was successful or failed)
  • Partition Tolerance (the system continues to operate despite arbitrary message loss or failure of part of the system).

NIST CAP Slide with Big Data

According to Peter Mell, a senior computer scientist for the National Institute of Standards and Technology, “In the database world, they can give you perfect consistency, but that limits your availability or scalability. It’s interesting, you are actually allowed to relax the consistency just a little bit, not a lot, to achieve greater scalability. Well, the Big Data vendors took this to a whole new extreme. They just went to the other side of the Venn diagram, and they said we are going to offer amazing availability or scalability, knowing that the data is going to be consistent eventually, usually. That was great for many things.” 

ACID vs. BASE

In most organizations, upwards of 80% of Big Data is in the form of “unstructured” text or content, including documents, emails, images, instant messages, video and voice clips. RDBMSs were designed to manage “structured” data in manageable fields, rows and columns, such as dates, social security numbers, addresses and transaction amounts. ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties that guarantees database transactions are processed reliably and is a necessity for financial transactions and other applications where precision is a requirement.

Conversely, most NoSQL DBs tout their schema-less capability, which ostensibly allows for the ingestion of unstructured data without conforming to a traditional RDBMS data format or structure. This works especially well for documents and metadata associated with a variety of unstructured data types, as managing text-based objects is not considered a transaction in the traditional sense. BASE (Basically Available, Soft state, Eventually consistent) implies the DB will, at some point, reach a consistent state and classify and index the content to improve the findability of data or information contained in the text or the object.
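
A short, hypothetical sketch of that schema-less ingestion, using Python's pymongo driver against a local MongoDB instance (the database, collection and field names are invented): two differently shaped documents land in the same collection, and a text index later makes the unstructured narrative searchable.

```python
# Illustrative sketch of schema-less document ingestion with MongoDB (pymongo).
# Assumes a MongoDB instance on localhost; names are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
notes = client["healthdb"]["clinical_notes"]

# Two documents with different shapes go into the same collection,
# with no ALTER TABLE and no up-front schema migration.
notes.insert_one({
    "patient_id": "patient-001",
    "type": "progress_note",
    "text": "Patient reports improved sleep after dose adjustment.",
})
notes.insert_one({
    "patient_id": "patient-002",
    "type": "imaging",
    "modality": "MRI",
    "attachments": [{"uri": "s3://bucket/scan-17.dcm", "bytes": 52428800}],
})

# A text index makes the unstructured narrative searchable later.
notes.create_index([("text", "text")])
print(notes.find_one({"$text": {"$search": "sleep"}}, {"patient_id": 1}))
```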

Increasingly, a number of database cognoscenti believe NoSQL solutions will overcome, or already have overcome, the “ACID test,” as availability is said to trump consistency, especially in the vast majority of online transaction use cases. Even Eric Brewer recently argued that bank transactions are BASE, not ACID, because availability equals dollars.

NoSQL Database Categories

As will be seen in the following section, NoSQL DBs simultaneously defy description and define new categories for NoSQL databases. Indeed, many NoSQL vendors possess capabilities and characteristics associated with more than one category, making it even more difficult for users to differentiate between solutions. A good example is the following taxonomy provided by Cloud Service Provider (CSP) Rackspace, which classifies NoSQL DBs by their data model.

NoSQL Data Models from Rackspace Updated

Note: In the original slide, Riak is depicted as a “Document” data model. According to Riak developer Basho, Riak is actually a key-value data model and its query API (application programming interface) is the popular web REST API as well as protocol buffers.

The chart above represents the five major NoSQL data models: Collection, Columnar, Document-oriented, Graph and Key-value. Redis is often referred to as a Column or Key-value DB, and Cassandra is often considered a Collection. According to Techopedia, a Key-Value Pair (KVP) is “an abstract data type that includes a group of key identifiers and a set of associated values. Key-value pairs are frequently used in lookup tables, hash tables and configuration files.” Collection implies a way documents can be organized and/or grouped.

Yet another view, courtesy of Beany Blog, describes the database space as follows:

CAP Theorem diagram

“In addition to CAP configurations, another significant way data management systems vary is by the data model they use: relational, key-value, column-oriented, or document-oriented (there are others, but these are the main ones).

  • Relational systems are the databases we’ve been using for a while now. RDBMSs and systems that support ACIDity and joins are considered relational.
  • Key-value systems basically support get, put, and delete operations based on a primary key.
  • Column-oriented systems still use tables but have no joins (joins must be handled within your application). Obviously, they store data by column as opposed to traditional row-oriented databases. This makes aggregations much easier.
  • Document-oriented systems store structured ‘documents’ such as JSON or XML but have no joins (joins must be handled within your application). It’s very easy to map data from object-oriented software to these systems.”

Beany Blog omits the Graph database category, which has a growing number of entrants, including Franz Inc., Neo4j, Objectivity and YarcData. Graph databases are designed for data whose relations are well represented as a graph, e.g., visual representations of social relationships, road maps or network topologies, and representation of “ownership” of documents within an enterprise for legal or ediscovery purposes.
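
To make the graph model concrete, here is a small conceptual sketch in Python using the networkx library (an in-memory graph toolkit standing in for a graph database); the people and documents are invented for illustration.

```python
# Conceptual sketch of the graph data model using networkx, an in-memory
# graph library standing in here for a graph database. Names are invented.
import networkx as nx

g = nx.Graph()

# Nodes and relationships: who treats whom, who authored which document.
g.add_edge("Dr. Adams", "patient-001", relation="treats")
g.add_edge("Dr. Adams", "discharge_summary_17", relation="authored")
g.add_edge("patient-001", "discharge_summary_17", relation="subject_of")
g.add_edge("Dr. Baker", "patient-001", relation="consulted")

# Typical graph questions: neighbors and paths, which are awkward to express
# as chains of SQL joins.
print(list(g.neighbors("patient-001")))
print(nx.shortest_path(g, "Dr. Baker", "discharge_summary_17"))
```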

Hadoop and NoSQL 

Apache Hadoop, with its Hadoop Distributed File System (HDFS), is an open-source platform that enables applications, such as petabyte-scale Big Data analytics projects, to scale across thousands of commodity servers, such as standard Intel x86 servers, by dividing up the workload.
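
The programming model Hadoop distributes across those servers is MapReduce. The toy Python sketch below runs the whole pattern in a single process purely to show its shape (map, shuffle, reduce); the sample "documents" are invented, and Hadoop's value lies in executing the same pattern across a cluster against HDFS.

```python
# Toy sketch of the MapReduce pattern that Hadoop executes across a cluster:
# map each record to key-value pairs, shuffle by key, then reduce each group.
from collections import defaultdict

documents = [
    "patient reports mild fever",
    "fever resolved after treatment",
    "patient discharged",
]

# Map phase: emit (word, 1) for every word in every document.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: group intermediate pairs by key.
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce phase: aggregate each group independently (hence easy to parallelize).
word_counts = {word: sum(counts) for word, counts in groups.items()}
print(word_counts)   # e.g. {'patient': 2, 'fever': 2, ...}
```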

Hadoop includes components derived from Google’s MapReduce and Google File System (GFS) papers as well as related open-source projects, including Apache Hive, a data warehouse infrastructure initially developed by Facebook and built on top of Hadoop to provide data summarization, query and analysis support; and Apache HBase and Apache Accumulo, both open-source NoSQL DBs that, in the parlance of the CAP Theorem, are CP DBs modeled after the BigTable DB developed by Google. Facebook purportedly uses HBase to support its data-driven messaging platform, while the National Security Agency (NSA) reportedly uses Accumulo for its data cloud and analytics infrastructure.

In addition to HBase, MarkLogic 7 and Accumulo, which integrate natively with HDFS, several other NoSQL DBs, whether open source and community supported or proprietary, can be used in conjunction with HDFS, including Couchbase, MongoDB and Oracle’s version of NoSQL based on the Berkeley open-source DB. As Hadoop is inherently a batch-oriented paradigm, additional DBs are needed to handle in-memory processing or real-time analysis. Therefore, NoSQL – as well as RDBMS – solution providers have developed connectors that allow data to be passed between HDFS and their DBs.

Datastax NoSQL and Hadoop slide

The slide above, courtesy of DataStax, illustrates how NoSQL and Hadoop solutions are transforming the way both transactional and analytic data are handled within enterprises that have large volumes of data to manage, both in real time (or near real time) and in post-processing, after data is updated or archived.

NoSQL DB Funding and Growth 

A recent note written by Wikibon’s Jeff Kelly, Hadoop-NoSQL Software and Services Market Forecast 2012-2017, gives a good indication of how well-funded and fast-growing the market for RDBMS alternatives has become.

“The Hadoop/NoSQL software and services market reached $542 million in 2012 as measured by vendor revenue. This includes revenue from Hadoop and NoSQL pure-play vendors – companies such as Cloudera and MongoDB – as well as Hadoop and NoSQL revenue from larger vendors such as IBM, EMC (now Pivotal) and Amazon Web Services. Wikibon forecasts this market to grow to $3.48 billion in 2017, a 45% CAGR [compound annual growth rate] during this five-year period.” Kelly forecasts the NoSQL portion of the market to reach nearly $2 billion by 2017.
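
The quoted forecast can be sanity-checked with one line of arithmetic; the sketch below simply compounds the 2012 base at a 45% annual rate for five years (a back-of-the-envelope check, not Wikibon's model).

```python
# Back-of-the-envelope check: $542M in 2012 compounded at a 45% CAGR to 2017.
base_2012 = 542e6
cagr = 0.45
projection_2017 = base_2012 * (1 + cagr) ** 5
print(f"${projection_2017 / 1e9:.2f}B")   # ~$3.47B, in line with the ~$3.48B forecast
```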

Kelly’s research also indicates that the top ten companies in the space, measured by funding dollars, received more than $600 million over the last 5 years, with funding increasing dramatically over the last 3 years, including $177 million for 2013 thus far. The top-funded NoSQL DB companies – in order of total funding amount – include DataStax (Cassandra), MongoDB, MarkLogic, MapR, Couchbase, Basho (creator of Riak), Neo Technology (creator of Neo4j) and Aerospike.

Note: On October 4, 2013, MongoDB announced it had secured $150 million in additional funding, which would make it the top-funded company in the space.

21 for 2020: NoSQL Innovators

As previously mentioned, there are now more than 50 vendors that have entered the NoSQL DB software and services space. As is the case with most nascent technology markets, more companies will emerge and others will buy their way into the market, fueling the inevitable surge of consolidation.

Oracle has publicly committed to its Berkeley DB open-source version of NoSQL, while IBM offers support for Hadoop and MongoDB solutions as part of its InfoSphere information management platform as well as Hadoop enhancements for its PureData System, and Microsoft supports a variety of NoSQL solutions on its Windows Azure cloud-based storage solution. Suffice it to say, the big three RDBMS vendors are pragmatic about the future of databases. Sooner or later, expect them all to make NoSQL acquisitions.

Meanwhile, here is a short list of companies anticipated to disrupt the database space over the next 5 to 7 years arranged in somewhat different categories from the above NoSQL taxonomies and based more on use case within the enterprise than on data model.

This group is also distinguished by added capabilities or functionality beyond just providing a simple data store with the inclusion of analytics, connectors (interoperability with other DBs and applications), data replication and scaling across commodity servers or cloud instances.

Disruptive NoSQL Database Solutions

Follow this link for brief profiles of these 21 NoSQL Innovators.

Note: Not all of these solutions are strictly NoSQL-based, including NuoDB and Starcounter, two providers that refer to their databases as “NewSQL”; and Virtue-Desk, which refers to its DB as “Associative.” All three get lumped into the NoSQL category because they offer alternatives to traditional RDBMS solutions.

Note: One could argue that other categories, such as embedded databases (see http://en.wikipedia.org/wiki/Embedded_database), could also be included. In over 20 hours of interviews, only two NoSQL solution providers, Oracle Berkeley DB and Virtue-Desk, mentioned embedding their databases within applications. In the case of Virtue-Desk, its solution is written entirely in Assembler and can be embedded in “any” device that has more than 1MB of memory – the DB is only 600KB installed.

Note: The clear trend for non-relational database deployment is for enterprises to acquire multiple DBs based on application-specific needs, what could be referred to as software-defined database adoption.

Posted in Big Data, Information Management Thought Leadership

Dell – The Privatization Advantage

The privatization of Dell Inc. closes a number of chapters for the company and puts it more firmly on a different course. The Dell of yesterday was primarily a consumer company with a commercial business, both with a transactional model. The new Dell is planned to be a commercially oriented company with an interest in the consumer space. The commercial side of Dell will attempt to be relationship-driven, while the consumer side will retain its transactional model. The company has some solid products, channels, market share and financials that can carry it through the transition. However, it will take years before the new model is firmly in place and adopted by its employees and channels, and competitors will not be sitting idly by. IT executives should expect Dell to pull through this transition and therefore should take advantage of the Dell business model and transitional opportunities as they arise.

Shareholders of IT giant Dell approved a $24.9 billion privatization bid from company founder and CEO Michael Dell and Silver Lake Partners, financed in part by banks and a loan from Microsoft Corp. It was a hard-fought battle with many twists and turns, but the ownership uncertainty is now resolved. What remains an open question is whether it was worth it. Will the company and Michael Dell be able to change the vendor’s business model and succeed in the niche he has carved out?

Dell’s New Vision

After the buyout Michael Dell spoke to analysts about his five-point plan for the new Dell:

  • Extend Dell’s presence in the enterprise sector through investments in research and development as well as acquisitions. Dell’s enterprise solutions market is already a $25 billion business, and it grew nine percent last quarter, at a time when competitors struggled. According to the CEO, Dell is number one in servers in the Americas and Asia-Pacific, ships more terabytes of storage than any competitor, and has completed 1,300 mainframe migrations to Dell servers. (Worldwide, IDC says Hewlett-Packard Co. (HP) is still in first place for server shipments by a hair.)
  • Expand sales coverage and push more solutions through the Partner Direct channel. Dell has more than 133,000 channel partners globally, with about 4,000 certified as Preferred or Premier. Partners drive a major share of Dell’s business.
  • Target emerging markets. While Dell does not break out revenue numbers by geography, APJ and BRIC (Brazil, Russia, India and China) saw minor gains over the past quarter year-over-year, but China was flat and sales in Russia dropped by 33 percent.
  • Invest in the PC market as well as in tablets and virtual computing. The company will not manufacture phones but will sell mobile solutions in other mobility areas. Interestingly, he said that when it comes to end user computing, Dell now sells more into the commercial space than the consumer space. This is a big shift from the old Dell and puts the company in the same camp as HP. The company appears to be structuring a full-service model for commercial enterprises.
  • “Accelerate an enhanced customer experience.” Michael Dell stipulates that Dell will serve its customers with a single-minded purpose and drive innovations that will help them be more productive, grow, and achieve their goals.

Strengths, Weaknesses, Challenges and Competition

With the uncertainty over, Dell can now fully focus on execution of plans that were in place prior to the initial stalled buyout attempt. Financially Dell has sufficient funds to address its business needs and operates with a strong positive cash flow. Brian Gladden, Dell’s CFO, said Dell was able to generate $22 billion in cash flow over the past five years and conceded the new Dell debt load would be under $20 billion. This should give the company plenty of room to maneuver.

In the last five quarters Dell has spent $5 billion on acquisitions, and since 2007, when Michael Dell returned as CEO, it has paid more than $13.7 billion for acquisitions. Gladden said Dell will aim to reduce its debt, invest in enhanced and innovative product and services development, and buy other companies. However, the acquisitions will be of a “more complementary” type rather than some of the expensive, big-bang deals Dell has done in the past.

The challenge for Dell financially will be to grow the enterprise segments faster than the end user computing markets collapse. As can be noted in the chart below, the enterprise offerings are less than 40 percent of the revenues currently and while they are growing nicely, the end user market is losing speed at a more rapid rate in terms of dollars.

Source: Dell’s 2Q FY14 Performance Review

Dell also has a strong set of enterprise products and services. The server business does well and the company has positioned itself well in the hyperscale data center solution space where it has a dominant share of custom server sales. Unfortunately, margins are not as robust in that space as other parts of the server market. Moreover, the custom server market is one that fulfills the needs of cloud service providers and Dell will have to contend with “white box” providers and lower prices and shrinking margins going forward. Networking is doing well too but storage remains a soft spot. After dropping out as an EMC Corp. channel partner and betting on its own acquired storage companies, Dell lost ground and still struggles in the non-DAS space to gain the momentum needed. The mid-range EqualLogic and higher-end Compellent solutions, while good, have stiff competition and need to up their game if Dell is to become a full-service provider.

Software is growing but the base is too small at the moment. Nonetheless, this will prove to be an important sector for Dell going forward. With major acquisitions (such as Boomi, KACE, Quest Software and SonicWALL) and the top leadership of John Swainson, who has an excellent record of growing software companies, Dell software is poised to be an integral part of the new enterprise strategy. Meanwhile, its Services Group appears to be making modest gains, although its Infrastructure, Cloud, and Security services are resonating with customers. Overall, though, this needs to change if Dell is to move upstream and build relationship sales. Given that the company traditionally has been transaction-oriented, moving to a relationship model will be one of its major transformational initiatives.

This process could easily take up to a decade before it is fully locked in and units work well together. Michael Dell also stated “we stand on the cusp of the next technological revolution. The forces of big data, cloud, mobile, and security are changing the way people live, businesses operate, and the world works – just as the PC did almost 30 years ago.” The new strategy addresses that shift but the End User Computing unit still derives most of its revenues from desktops, thin clients, software and peripherals. About 40 percent comes from mobility offerings but Dell has been losing ground here. The company will need to shore that up in order to maintain its growth and margin objectives.

While Dell transforms itself, its competitors will not be sitting still. HP is in the midst of its own makeover; it has good products and market share but still suffers from morale and other challenges caused by the upheavals of the last few years. IBM Corp. maintains its version of the full-service business model but will likely take on Dell in selected markets where it can still get decent margins. Cisco Systems Inc. has been taking market share from all the server vendors and will be an aggressive challenger over the next few years as well. Hitachi Data Systems (HDS), EMC and NetApp Inc., along with a number of smaller players, will also test Dell in the non-DAS (direct-attached storage) market segments. It remains to be seen if Dell can fend them off and grow its revenues and market share.

Summary

Michael Dell and the management team have major challenges ahead as they attempt to change the business model, re-orient people’s mindsets, develop innovative, efficient and affordable solutions, and fend off competitors while they slowly back away from the consumer market. Dell wants to be the infrastructure provider for cloud providers and enterprises of all types – “the BASF inside” in every company. It still intends to do this by becoming the top vendor of choice for end-to-end IT solutions and services. As the company still has much work to do in creating a stronger customer relationship sales process, Dell will have to walk some fine lines while it figures out how to create the best practices for its new model. Privatization enables Dell to deal with these issues more easily without public scrutiny and sniping over margins, profits, revenues and strategies.

Bottom Line

Dell will not be fading away in the foreseeable future. It may not be so evident in the consumer space but in the commercial markets privatization will allow it to push harder to remain or be one of the top three providers in each of the segments it plays in. The biggest unknown is its ability to convert to a relationship management model and provide a level of service that keeps clients wanting to spend more of their IT dollars with Dell and not the competition. IT executives should be confident that Dell will remain a reliable, long-term supplier of IT hardware, software and services. Therefore, where appropriate, IT executives should consider Dell for its short list of providers for infrastructure products and services, and increasingly for software solutions related to management of big data, cloud and mobility environments.

Posted in Big Data, Information Management Thought Leadership