If you’ve been in the technology business long enough, you remember the “database wars” of the 1990s. During that era, there were more than 15 different types of databases, all vying to house your company’s data. There were so many database options that knowing where to house your data, and how to access it, became quite an issue. As Y2K rolled around, however, the database vendors dwindled back down to a much more manageable number.

So much content is generated these days that faster processing power and disk storage access are required. Today, with offerings from major players like Oracle and Salesforce, along with open source databases like Apache Hive, the number of database offerings is climbing once again. The industry is becoming inundated, and data is being siloed as a result.

Understanding the battle at hand

We can thank two megatrends for the explosion of databases that are flooding the market today. The first is Big Data. Every day, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals to name a few. This data is BIG Data, and the volume of it that needs to be managed by applications is increasing dramatically.

It is not only a problem of volume, however, as the velocity and variety of data are also increasing. Data at rest, like the petabytes of data managed by the world’s largest Hadoop clusters, needs to be accessed quickly, reliably, and securely. Data in motion, like your location, needs to be analysed immediately – before the window on the fleeting opportunity or preventable threat closes. Big Data, and the introduction of Apache Hadoop with its high-volume distributed file system, have drawn a line in the sand for the first battle of the new database wars.

The second factor contributing to the resurgence of the database wars is cloud computing. The Cloud is reshaping the way we as an industry build and deploy software. Its economics and usability are clear: the cloud enables the next generation of ISVs and applications to be built in less time and at lower cost, all while increasing the scalability and resiliency of those applications.

In fact, ISVs are ahead of the curve – according to Gartner, over 50% of ISVs will be building pure cloud applications within the next three years, and 20% of IT spending over that period will go to cloud- and SaaS-based services. As the market transitions from on-premise to cloud, the use of hybrid applications will become more popular, once again changing the rules for how we access and use data.

According to 451 Research, the factors driving data management technologies include scalability, performance, relaxed consistency, agility, intricacy, and necessity. NoSQL projects were developed in response to the failure of existing suppliers to meet these needs.

While the NoSQL offerings are closely associated with Web application providers, the same drivers have spurred the adoption of data-grid/caching products and the emergence of a new breed of relational database products and vendors. For the most part, these database alternatives are not designed to directly replace existing products, but to offer purpose-built alternatives for workloads that are unsuited to general-purpose relational databases.

NewSQL and data-grid products have emerged to meet similar requirements among enterprises, a sector that is now also being targeted by NoSQL vendors. The list of new database players with alternative management methods is growing seemingly exponentially. And, in today’s wars, the backdrop is no longer the on-premise databases of yesteryear; today’s wars are happening in the cloud.

Fighting on the frontline – ways to win

So what does this mean for an enterprise that needs to access its data from a number of diverse Cloud sources? What light saber exists in today’s world to aid IT managers and application developers in these fierce wars? How can we keep up with this explosion in data sources in the cloud?

One of the biggest weapons that today’s IT workers have at their disposal is a premium data connectivity service. Although point-to-point connectivity might be available, writing unique access calls that conform to every database API quickly becomes unwieldy and complex. There are too many different APIs, and too many versions of those APIs, making your application far too complicated to maintain. Even for on-premise applications, the changes across all of these cloud data sources are simply too frequent to manage.

There is a much better way to connect and access the multitude of cloud data sources – a single pipe into a connectivity management service that sits in the cloud. The call from your application conforms to standard SQL queries along with a quick selection of which cloud data source you need to connect with.

The connectivity service executes the SQL query against the appropriate cloud data source, managing all of the complexity, APIs, and version control itself so that your application does not have to. This Connectivity as a Service provides standards-based SQL access and connectivity management to the cloud. The service allows you to pay only for what you consume and for the number of cloud data sources you actually need.
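The single-pipe model described above can be sketched in a few lines. The following is an illustrative Python sketch only – the `ConnectivityService` class, the source names, and the routing logic are hypothetical stand-ins (backed here by in-memory SQLite databases), not the API of any real product:

```python
import sqlite3


class ConnectivityService:
    """Illustrative single pipe: routes standard SQL to a named data source.

    In a real Connectivity-as-a-Service offering, each entry would wrap a
    cloud data source along with its driver, API version, and credentials;
    here two in-memory SQLite databases stand in for cloud sources.
    """

    def __init__(self):
        self._sources = {}

    def register(self, name, connection):
        # The service, not the application, tracks each source's details.
        self._sources[name] = connection

    def query(self, source, sql, params=()):
        # The application supplies only standard SQL plus a source name.
        conn = self._sources[source]
        return conn.execute(sql, params).fetchall()


# Two stand-in "cloud" data sources.
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE accounts (name TEXT, region TEXT)")
crm.execute("INSERT INTO accounts VALUES ('Acme', 'EMEA')")

warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE orders (account TEXT, total REAL)")
warehouse.execute("INSERT INTO orders VALUES ('Acme', 1200.0)")

service = ConnectivityService()
service.register("crm", crm)
service.register("warehouse", warehouse)

# Same SQL dialect, different sources -- the service hides the differences.
print(service.query("crm", "SELECT name FROM accounts WHERE region = ?", ("EMEA",)))
print(service.query("warehouse", "SELECT total FROM orders WHERE account = ?", ("Acme",)))
```

The point of the sketch is the shape of the call: the application never touches source-specific APIs, only standard SQL and a source name, so API and version churn stays inside the service.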

This is the beauty of Connectivity as a Service – it enables you to focus on your application, while the connectivity management service keeps up with versions and API changes. So even as the database wars heat up, premium data connectivity solutions will help you cool down – accessing and analysing your data no matter where it may live!