16 Nov
We’re Reinventing the Query Builder (Well, Kinda)
Ah, remember the query builder?
In the brief history of the data ecosystem, the query builder already seems like a relic of a bygone era. Compared to predictive analytics, machine learning, and all the innovation going on around Big Data, the query builder is hardly a shiny new thing.
Basically, you could say it’s an old interface for old data technologies.
And yet the query builder has hardly gone away. Considering that there are millions of business users and data analysts using legacy BI systems or (gasp!) Microsoft Access, it’s still one of the most common and widely used data tools. In fact, many more users interact with the humble query builder every day than with Big Data tools or even common statistical technologies, such as R.
So the query builder still plays a prominent role among the ever-growing cast of data tools, but is it on its way to exiting stage right?
Traditional query builders have three core functions, and all of them are slipping into irrelevance
Perhaps you’ve never seen or used a query builder before. If so, it’s probably because it only lives within the walled gardens of enterprise BI platforms and database systems – it’s not a common widget or component you’d find on the Web. Generally speaking, it has one main job, which is to make it easier to work with data.
In fact, within the BI/database world, the query builder has three super powers:
- Getting access to relevant data. With a data project, data access is always the first step. A query builder provides a really handy way to select your data sets and narrow down the rows and columns to only the important stuff.
- Creating data operations visually. It’s much faster and easier to build queries visually. Instead of typing out lengthy Structured Query Language (SQL) statements and then fixing any mistakes manually, you can quickly get the same results by clicking around in a graphical interface.
- Transforming data without writing code. Perhaps best of all, using a query builder doesn’t require knowing how to write code in SQL, Python, R or other data processing languages. And so even non-technical business users and data analysts can use them.
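To make those three capabilities concrete, here’s a rough sketch of what a query builder does under the hood: the user’s point-and-click choices (pick a data set, choose columns, filter rows) get translated into a generated SQL statement. The table and data below are made up for illustration.

```python
import sqlite3

# A tiny in-memory table standing in for a BI data set (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "East", 250.0), (2, "West", 125.0), (3, "East", 75.0)],
)

# What the user does by clicking -- select a data set, narrow down the
# columns, filter the rows -- the builder emits as a SQL statement:
generated_sql = (
    "SELECT region, amount FROM orders "
    "WHERE amount > 100 ORDER BY amount DESC"
)

rows = conn.execute(generated_sql).fetchall()
print(rows)  # [('East', 250.0), ('West', 125.0)]
```

The user never sees the SQL; they just see the narrowed-down result.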
However, we’re rapidly moving into a new reality where these super powers have encountered some kryptonite.
For all of its best qualities, the traditional query builder has one significant limitation – it’s tied to structured data sets and the walled database systems where structured data lives.
But increasingly, the most relevant data sources are now found on the Web. Instead of BI platforms and database systems, the data comes from Web-based APIs and feeds from devices connected to the Internet of Things. Instead of neatly structured tables, the data comes in text formats such as CSV, JSON and XML. The data is a raw, incoming stream of files that need to be combined and processed.
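To see the difference, here’s a minimal sketch of what that incoming data looks like in practice. The payloads below are made up, but they stand in for what an API or device feed actually sends: raw text that has to be parsed before you can do anything with it.

```python
import csv
import io
import json

# Hypothetical raw payloads, as they might arrive from a Web API or an
# IoT device feed -- text, not tidy database tables.
json_text = '[{"device": "sensor-1", "temp": 21.5}, {"device": "sensor-2", "temp": 19.0}]'
csv_text = "device,temp\nsensor-3,18.2\nsensor-4,22.7\n"

# Each format needs its own parsing step before the data is usable.
readings = json.loads(json_text)
readings += list(csv.DictReader(io.StringIO(csv_text)))

devices = [r["device"] for r in readings]
print(devices)  # ['sensor-1', 'sensor-2', 'sensor-3', 'sensor-4']
```

A traditional query builder has no answer for this; the data never lived in its database to begin with.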
In this brave new world, users still need an easy way to work with data, but it requires a better approach.
Pipes are the new queries
Enter data pipes.
Like a query, a pipe takes a data set and transforms it into something useful. But unlike a query, a pipe is far more flexible – it’s not tied to a database or a single data format. And instead of packing everything into a query statement, pipes offer greater simplicity and clarity by breaking down complex tasks into a series of granular steps, while offering a much wider array of operations.
Push data in, run the pipe steps, and get the results. Pretty sweet.
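The idea can be sketched in a few lines: a pipe is just an ordered list of small, named steps, each taking rows in and handing rows out. (The step names and data here are illustrative, not the Flex.io API.)

```python
def filter_rows(rows):
    # Granular step 1: keep only the rows we care about.
    return [r for r in rows if r["amount"] > 100]

def select_columns(rows):
    # Granular step 2: narrow down to the important columns.
    return [{"region": r["region"], "amount": r["amount"]} for r in rows]

def run_pipe(rows, steps):
    # Push data in, run each step in order, get the results out.
    for step in steps:
        rows = step(rows)
    return rows

data = [
    {"id": 1, "region": "East", "amount": 250.0},
    {"id": 2, "region": "West", "amount": 125.0},
    {"id": 3, "region": "East", "amount": 75.0},
]

result = run_pipe(data, [filter_rows, select_columns])
print(result)  # [{'region': 'East', 'amount': 250.0}, {'region': 'West', 'amount': 125.0}]
```

Because each step is self-contained, you can reorder, add, or remove steps without rewriting one big query statement.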
With the flood of data sources migrating to the cloud, we believe data pipes are the new queries.
At Flex.io, we’re working to create a Web service for building data pipes. We’re calling it “flexible input and output”, but really, it’s just a new kind of query builder, reinvented for the Web.
The real trick is to keep the best qualities of the traditional query builder, but adapt it for the new realities of Web-based data. This means making it easy to access relevant data, creating pipe steps visually, and giving non-technical users the ability to transform data without writing code. It also means connecting to Web-based data sources, processing multiple input files, and handling multiple file formats.
Here’s a sample pipe that offers a small taste of what we have in mind:
This particular pipe doesn’t really do anything terribly special. It’s just a proof of concept showing a data input and a granular set of steps that can be run again and again.
In the coming weeks, we’ll be showing off a lot of new pipe examples. And very soon, we’ll be opening up our private beta, so you can start building your own data pipes as well.
Much more to come. Stay tuned, and please let us know what you think!