The rise of the Internet of Things has thrown up many curious challenges, not least of which is where on earth all of that extra data will sit, and how it can be put to effective use.

Cue the arrival of data lakes - huge repositories that feed data to users on an as-needed basis (think of data as a utility). Enterprises have been struggling to handle ever-growing streams of new data, and data lakes may be the answer.

The concept of data lakes has been around for a while - in fact, Jed Mole of Acxiom was writing about them over a year ago. But it appears to be a topic of growing interest among larger enterprises.

This recent Network World piece expands on the principle.