The Push and Pull of Enterprise Data Collaboration

April 02, 2015 by Ken Kaczmarek

Like Doctor Dolittle’s famed pushmi-pullyu, enterprise data projects have a serious agility problem.

A recent story comes to mind.

I was on a call with a prospective customer to request data feeds for an analytics project. We had run a successful trial project, management had approved the business case, and everyone was happy. At last, we were ready to go into production.

We simply needed data feeds for two well-known, standard tables from a commonly used enterprise system. We’d already worked with an extract and had everything set with the integration and analysis on our end. Now it was just a matter of flipping the “switch” to make it repeatable. Easy peasy, right?

So the conference call ensued, with at least eight people on it: a business liaison, a technical project leader, an ERP analyst, an EDI analyst, at least two extract developers, a security lead and, of course, me, the consultant.

Then came the questions. What data sources again? All the fields? What about format types? What time do you want it run?  Wait, what time zone? What about SFTP credentials?  Delta or full set? What’s the exact date range again? There’s no PII, right?  Who’s handling this?  Who’s handling that?

Sigh.

[Image: llama group]

Long story short: It took over 2.5 months to get the data feed set up — all for a straightforward case without any complexity to speak of.

Sadly, this scenario is far from uncommon. I can give countless examples of similar data requests (even just one-off extracts) that took ages to deliver, at a wide variety of organizations. Perhaps then it’s little surprise that so many conversations about new data projects start with: “If we can get the data…”.

Let’s look at a few reasons why this happens:

  • IT departments are completely buried, since they touch everything. There’s no question that IT is beaten down by a zillion requests; hence the big trends toward self-service BI and end-user development. But IT remains the gatekeeper, owner and guardian of corporate data. Until data is either democratized or IT has a simpler way to get data into the hands of end users, this bottleneck will continue.
  • The process for delivering data isn’t built for agility. Part of the reason data requests are backlogged is that there isn’t much freedom to handle them differently. For instance, is there any extra prioritization for data projects that are “operationally critical” vs. “here’s an idea we want to try”? If not, least common denominator and 12-page checklist it is. This approach does not play well with minimizing lead times, shortening iteration cycles and addressing issues rapidly.
  • Communication hasn’t kept up with specialized domain expertise. The business user, the IT analyst and the outside vendor or consultant each bring key knowledge that makes a project successful. Yet collaboration happens over months via fragmented emails, phone calls, scribbled notes and start-and-stop efforts. Because the original data extract is rarely documented, the work gets duplicated when the feed is later automated (a lightweight, shared feed specification helps here; see the sketch after this list). Projects are mired in inefficiency due to lack of data context and shared knowledge.
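
To make that last point concrete, here is a minimal sketch of what a shared feed specification might look like, written in Python purely for illustration. Every name, field and value below is hypothetical (the post doesn’t name the actual tables or systems); the point is that the answers to all of those kickoff-call questions live in one small, versioned artifact instead of scattered emails and notes.

```python
# A minimal sketch of a shared data-feed specification. Every name and value
# here is hypothetical; the goal is simply to capture the kickoff-call answers
# (sources, fields, format, schedule, delivery, delta vs. full, PII, owners)
# in one place that everyone on the project can read and challenge.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class FeedSpec:
    source_system: str            # which enterprise system the tables live in
    tables: List[str]             # the source tables being extracted
    fields: List[str]             # columns to include ("*" for all)
    file_format: str              # delivery format, e.g. "csv"
    schedule_cron: str            # when the extract runs, in cron syntax
    timezone: str                 # the time zone that schedule is expressed in
    delivery: str                 # transport target, e.g. an SFTP drop location
    load_type: str                # "delta" or "full"
    date_range: Optional[str]     # e.g. "rolling 90 days"; None for all history
    contains_pii: bool            # flag reviewed by the security lead
    owners: List[str] = field(default_factory=list)  # who to call when it breaks


# Hypothetical example for a feed like the one in the story above
# (the actual table names were never given in the post).
order_feed = FeedSpec(
    source_system="ERP-PROD",
    tables=["ORDER_HEADER", "ORDER_LINE"],
    fields=["*"],
    file_format="csv",
    schedule_cron="0 2 * * *",       # 02:00 daily...
    timezone="America/Chicago",      # ...in this time zone
    delivery="sftp://feeds.example.com/inbound",
    load_type="delta",
    date_range="rolling 90 days",
    contains_pii=False,
    owners=["erp-analyst@example.com", "extract-dev@example.com"],
)
```

Whether it’s a dataclass, a YAML file or a wiki page matters far less than having a single place where the business liaison, the analysts and the consultant are all looking at the same answers.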

Dealing with the multiple “heads” of data collaboration takes patience and perseverance. But, on the bright side, data projects do end up getting done — eventually. The real loss is all the innovative ideas, new markets and cost savings that could have been explored, but simply fell by the wayside due to such a long and winding road.

What would it look like if data collaboration were truly agile, iterative and efficient? Well, to begin with, here are 5 actionable steps to make data projects faster.

Does the data request issue resonate with you and your experience? I’d love to hear your story in the comments.


Images by: BlackburnPhoto (modified per CC license) and Logan Scholfield