Regardless of your organization’s product and size, your data stack has likely come a long way — and likely still has a long way to go. To celebrate the new year, take a look back at data stack trends from the past 12 months and read expert recommendations on how to uplevel your data stack strategy in 2019.
First, a Look Back at Data Stack Trends in 2018
In 2018, many companies were still figuring out how to get customer data into their stack. And this isn’t the fault of the organizations themselves — it’s challenging to get data out of systems that were built decades ago. Part of the problem is trying to get new insights with older technology. As a result, many enterprises are looking to reinvent their data stack.
When these enterprises do have this data available, it can be overwhelming. How do you get insights from new types of data, and how do you unite disparate data sources? As the business grows, so do the analyses and the types of questions asked. For example, a product manager might build a dashboard that works well enough for a month. But then, as a result of that analysis, stakeholders ask more questions, and an analyst ends up building a dashboard on top of the existing dashboard.
With this data proliferation, the complexity continues. Teams want to know that they can trust the data that's captured. Otherwise, you could have the most robust dashboards and analysis, but leaders will distrust the data and continue to make major business decisions based on gut feel. This is compounded by the fact that the data collected is no longer just rows and columns in a CSV. It comes in many different formats — JSON, XML, and more. Enterprises need the expertise and the tools to integrate and capture all of this varied data in a trustworthy way.
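To make the multi-format problem concrete, here is a minimal sketch (using only the Python standard library) of normalizing JSON and XML events into a single row shape before loading them into a warehouse. The payloads and field names (`user_id`, `event`, `ts`) are hypothetical, not from any particular product:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical event payloads arriving from two different systems.
json_event = '{"user_id": "u1", "event": "page_view", "ts": "2018-12-01T10:00:00Z"}'
xml_event = '<event user_id="u2" name="signup" ts="2018-12-02T11:30:00Z"/>'

def from_json(raw):
    # Parse a JSON event into a common row format.
    d = json.loads(raw)
    return {"user_id": d["user_id"], "event": d["event"], "ts": d["ts"]}

def from_xml(raw):
    # Parse an XML event into the same row format.
    el = ET.fromstring(raw)
    return {"user_id": el.get("user_id"), "event": el.get("name"), "ts": el.get("ts")}

# Both sources now land in one uniform list of rows.
rows = [from_json(json_event), from_xml(xml_event)]
print(rows)
```

The point of the sketch is the uniform target schema: however the data arrives, downstream analysis only ever sees one row shape.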
How to Be On Top of Your Data Stack in 2019
When You Can, Buy over Build
The temptation to build architecture and tools can be hard to resist. After all, building provides the customization and personalization that your organization needs. Plus it’s hard to look at a technical problem and not solve it yourself.
But buying a tool or architecture has a benefit that is often overlooked: it frees up your resources to tackle other issues that impact your business. For example, a 50-person ecommerce startup doesn't need to focus on building its own data warehouse. At the end of the day, your business is focused on adding value to your customers. Prioritize building what's unique to your business and will make an impact on your bottom line. For internal functions and backend technologies, see what you can buy to get the bandwidth you need to focus on what matters to your paying customers. You can always build later.
Once you’ve decided to buy, choose technologies that integrate easily with each other (for example, Heap, Snowflake, and Looker). That way, you can reduce the number of silos between your data sources and get a more complete picture of your data.
Hire Based on Metrics, Not on First Impressions
All businesses seek to hire motivated and highly qualified employees. The question is: how do you decide, during the interview process, whether someone is motivated and highly qualified? Results from a survey of 2,000 hiring managers found that 33% knew whether they would hire someone within the first 90 seconds. That snap judgment can introduce all types of bias, and it’s not nearly enough time to learn whether someone is fully qualified for the role. (For an insider view into how Heap hires, check out CTO Dan Robinson’s article about interviewing engineers.)
This issue becomes even more complex when hiring for technical positions. More than ever, you need employees who have experience with many different tools and technologies — Hive, Spark, SQL, and more. And the people you needed when the company was 50 people are not the people you need when the company is 5,000. Therefore, it’s critical to take a metrics-based approach to hiring. For example, put all of the successful people in your department through an assessment and analyze the data to discover what correlates with success in the role.
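As a rough illustration of that metrics-based approach, the sketch below computes the Pearson correlation between a hypothetical assessment score and a success metric (say, a first-year performance rating) for current employees. The data is invented for illustration; a real analysis would also need a much larger sample and checks for confounds:

```python
from math import sqrt

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative data only: assessment scores vs. performance ratings
# for six current employees in the role.
scores = [62, 75, 80, 88, 91, 70]
ratings = [2.9, 3.4, 3.8, 4.2, 4.5, 3.1]

r = pearson(scores, ratings)
print(f"correlation between assessment score and performance: {r:.2f}")
```

A strong correlation would suggest the assessment is predictive and worth weighting in hiring decisions; a weak one would suggest the first-impression problem is leaking into the process.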
Dig Into Your Customers’ Journey
Building better products, providing superior customer service, and troubleshooting operational issues all depend on accurately understanding your organization’s customer journey. Many businesses still rely on aggregate, superficial data (like time spent on page) to make decisions. However, there should be an ongoing effort to uplevel this strategy.
The Customer Data Maturity Curve helps organizations understand the sophistication of their user behavior analytics. Many enterprises have large data science and data engineering teams, so in some areas they can reach higher levels of personalization and join all of their data in a central repository. But even in these larger organizations, budding teams are often just getting ramped up on joining operational and experience data to make decisions.
To move up this curve, companies must understand and connect many disparate data sources. This includes unioning operational and product data — transactional, account-level, and engagement data — across silos.
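To show what connecting siloed sources can look like in miniature, here is a hedged sketch that joins hypothetical transactional and engagement records into one account-level view. All table and field names (`account_id`, `amount`, `sessions`) are assumptions for illustration; in practice this join would typically happen in the warehouse with SQL:

```python
# Two siloed sources, keyed by the same hypothetical account_id.
transactions = [
    {"account_id": "a1", "amount": 120.0},
    {"account_id": "a2", "amount": 45.0},
]
engagement = [
    {"account_id": "a1", "sessions": 14},
    {"account_id": "a2", "sessions": 3},
]

def join_on_account(txns, engs):
    # Build one unified record per account from both sources.
    by_account = {t["account_id"]: dict(t) for t in txns}
    for e in engs:
        by_account.setdefault(e["account_id"], {"account_id": e["account_id"]})
        by_account[e["account_id"]]["sessions"] = e["sessions"]
    return list(by_account.values())

unified = join_on_account(transactions, engagement)
print(unified)
```

The unified rows let you ask questions neither silo could answer alone, such as whether high-engagement accounts also transact more.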
For more data stack trends in 2019, check out the expert panel from our recent Data-Driven SF event — full of insights and recommendations from thought leaders from Heap, Snowflake, and Hearsay.