North American Bancard: An Active Metadata Pioneer


Governing Snowflake and Supercharging Sigma with Atlan

The Active Metadata Pioneers series features Atlan customers who have recently completed a thorough evaluation of the Active Metadata Management market. Paying forward what you’ve learned to the next data leader is the true spirit of the Atlan community! So they’re here to share their hard-earned perspective on an evolving market, what makes up their modern data stack, innovative use cases for metadata, and more.

In this installment of the series, we meet Daniel Dowdy, Director, Big Data Analytics at North American Bancard. Daniel shares his organization’s journey toward centralizing data in Snowflake and exposing it in Sigma, and how Atlan will play a key role in both advancing their data governance strategy and reducing the effort their analysts and engineers spend finding, understanding, and applying data.

This interview has been edited for brevity and clarity.


Could you tell us a bit about yourself, your background, and what drew you to Data & Analytics?

It’s a bit of a story; for me, it wasn’t a direct path. I’ve always been a procedural and analytical person with a passion for problem-solving and helping people. I started out by serving in the Marine Corps, which I think enhanced those attributes while adding a ton of leadership skills.

After the Marine Corps, I decided to focus my career on finance. So, a little over 12 years ago I joined the finance team here at North American Bancard. After advancing to some leadership roles, I ended up overseeing the technical consultants we had for our accounting software, and I was far more interested in going under the hood, so to speak, and extracting data than in using the software’s GUI.

So from there, things kind of took off. I took some software engineering courses, and I had the opportunity to stand up the Business Planning and Analysis team in our operations organization. We ended up being a lot more than that as we started centralizing reports and KPIs and really developing a business intelligence and advanced analytics roadmap. This led me to move into the IT organization and manage the Data Science and Reporting team. 

The success we had there, building our next-gen data warehouse on Snowflake and enabling self-service analytics across the organization using real-time data streams, led me into my current role. It wasn’t a clear or direct path where I knew from the start that I would get into data and analytics, but I’m happy to be here. And with how everything’s evolved over the last decade in data-centric roles, I’m more excited than ever to be in the data and analytics world.

Would you mind describing North American Bancard, and how your data team supports the organization?

North American Bancard is the sixth-largest independent acquirer in the nation, helping merchants process about $45 billion annually. For the last 20-plus years, NAB has been focused on making a platform that’s as easy as possible for merchants to grow their business on, through innovations in credit card processing, e-commerce, mobile payments, and a whole lot more.

When we talk about the data team specifically, NAB Holdings has a core data team with engineers, analysts, administrators, and data scientists. Several other departments in our organization, in addition to many of our other subsidiary companies, have their own data teams with whom we collaborate to create a very robust data ecosystem.

One of the best things about our data team is that we never get stuck in the “This is how it’s always been done” mindset. Everyone on our team is always looking for the next way to innovate and improve, and we’re always evaluating new technology to find the best way to do things. I am incredibly grateful to have the opportunity to work with an amazing data team. Their collaboration and support as we constantly evolve and innovate toward building future systems is truly exciting.

Could you describe your data stack?

At a high level, we have a multi-cloud approach, leveraging services across various cloud providers and spanning multiple regions. We have a wide variety of data sources, covering almost every database type you can think of. We have centralized most of this into Snowflake, and a large portion of what lands in Snowflake is synced via CDC and the various tools and technologies we use to get it there.

We utilize a combination of modern technologies for data replication and streaming alongside our ETL/ELT solutions and processes. Once centralized into Snowflake and transformed to create our data warehouse and data marts, we primarily use Sigma as our BI layer. Over the last couple of years, the Sigma and Snowflake combination has been a pivotal point in the evolution of our tech stack.

We were once at a roadblock: we had such a variety of data sources across multiple servers, and with the data sizes we had, queries would take 30 hours to run and then often fail mid-analysis. Since we migrated to Snowflake, we’re getting those same results in 30 seconds or less. It took us from a “data desert” environment to an oasis of information, in many aspects.

That, in turn, increased the volume of the requests coming in. A lot more people could now get a lot more information, and they wanted it quickly, so we had to develop an environment that promoted self-service analytics that put the data at the fingertips of the analysts versus going through us in a request system to extract it for them. That’s where Sigma came into our tech stack.

Sigma’s Excel-like interface allowed for immediate adoption, and we were able to expose reporting data and let those analysts explore. Then, they could answer 20 questions they might come up with in just minutes, versus the days of back-and-forth they once spent working through a ticketing system.

We’ve got a very wide range of technology, but our focus is centralizing in Snowflake and allowing it to be consumable within Sigma.

What prompted your search for an Active Metadata Management platform? What stood out about Atlan?

We wanted a really solid data governance solution, and we wanted the ability to create a robust data glossary. Those are the main features we were looking for.

When we were doing the evaluation, we saw that other tools could do that. But with Atlan, you could do those things, and you could also do all of these other things that we weren’t necessarily looking for but really needed.

The Chrome Plug-in was huge for creating that seamless integration with Sigma. We have hundreds of Sigma users, and it was important to give them an enhanced experience where they can see more information, or submit Jira tickets directly in a dashboard, without having to navigate away from it. Not only that, the Jira ticket then tags the dashboard for our analysts to work more quickly on resolving issues.

For Sigma, it’s going to increase adoption, but it also gives us the ability to expand the scope of who we allow into that environment. We’ve still remained pretty limited on who we offer Sigma to. Now that we can see the lineage of all these reports and exactly what’s going into the system, and we have more controls, we’re more comfortable expanding who we allow into that environment. And on top of that, the user experience is going to be that much better with this enhancement.

The Sigma integration is the primary use case that was a hard requirement. We needed something that integrated with Sigma, and of everyone we went through a proof of concept with, yours was best in class. We evaluated another solution earlier this year, and they said, “Oh yes, we can eventually.” Well, we can’t buy something that will eventually work with what we need now. You were spot-on with it.

Then there were the cost optimization functions in Snowflake, the personas, and the ability to tag items for governance purposes. It had so many extra layers that we didn’t even have in our requirements that just made it the clear tool.

And I have to say, the salespeople and the sales engineer we worked with were just absolutely amazing. They were very helpful, and I can’t give them enough of a shout-out.

What do you intend on creating with Atlan? Do you have an idea of what use cases you’ll build, and the value you’ll drive?

A lot of what we’re doing is about enhancing security. Even though we have really good security policies, our thought is, “How can we make it better?” How can we look for things that should be masked, then tag them properly? How can we identify new objects being added that might be sensitive? Security is always top-of-mind to reduce our risk and exposure.
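The kind of scan Daniel describes, looking for newly added objects that might be sensitive so they can be tagged for masking, can start from column metadata alone. Below is a minimal, illustrative sketch in Python: it flags column names that match common sensitive-data patterns. The pattern lists and tag names are assumptions for illustration, not NAB’s actual rules, and a production scan would typically combine name matching with data sampling and feed the results into Snowflake tags or a catalog like Atlan.

```python
import re

# Illustrative name patterns for columns that may hold sensitive data.
# Short tokens like "pan" and "ssn" are anchored to underscore boundaries
# so they don't match inside unrelated words.
SENSITIVE_PATTERNS = {
    "pii": re.compile(r"(^|_)(ssn|dob)($|_)|social_security|birth_date|email|phone", re.I),
    "payment": re.compile(r"(^|_)(pan|cvv)($|_)|card_number|account_number|routing", re.I),
}

def flag_sensitive_columns(columns):
    """Given (table, column) pairs, return the columns that look sensitive,
    along with the tag category they should likely receive."""
    flagged = []
    for table, column in columns:
        for tag, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(column):
                flagged.append((table, column, tag))
                break  # one tag per column is enough for triage
    return flagged

# Example: metadata as it might come back from information_schema.columns
cols = [
    ("merchants", "merchant_id"),
    ("merchants", "contact_email"),
    ("transactions", "card_number"),
]
print(flag_sensitive_columns(cols))
# [('merchants', 'contact_email', 'pii'), ('transactions', 'card_number', 'payment')]
```

From here, each flagged column could be tagged in Snowflake (Snowflake supports attaching masking policies to tags), so newly landed sensitive columns pick up masking automatically once tagged.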

Outside of that, everything our end-user analysts do in Sigma is going to be that much faster when they’re able to see these definitions, and able to see these past comments, tickets, and discussions around the data that they’re actively working on.

The ROI we’re going to see from the efficiency gains, from the end-user analyst all the way to the engineer trying to fix a report someone says is broken, I think those are the biggest value drivers.

Beyond that is just building a robust data glossary and dictionary, which will help the organization, as a whole, in creating consistent metrics and reporting solutions.
