This content, written by Fabio Beltramini, was initially posted on the Looker Blog on Oct 31, 2019. The content is subject to limited support.
Editor’s note: Carl Anderson will be sharing his thoughts on open-source tooling in a session at JOIN on Wednesday, 11/6, at 4:15 pm. Don’t have a ticket to JOIN yet? Reach out to our team to get your ticket and take part in the conversation.
Everyone has opinions about how to build a great data experience for end-users.
But how do you boost the experience for the developers and analysts building reports for a business?
Carl Anderson has some answers to that question.
Carl is the Senior Director of Data Science at WW International (and author of “Creating a Data-Driven Organization”) and has been an advocate for the developer experience in Looker for years. Carl’s new suite of LookML modeling enhancements (called lookml-tools) boosts the developer experience by providing new ways to enforce consistency for developers, understand relationships among LookML files, and keep LookML up to date.
We sat down with Carl to talk about his approach to making Looker successful for companies at scale, how to set LookML developers up for success, and the value of open-source development on top of a data platform.
Looker: Carl, you’ve been part of deploying Looker at Warby Parker, WeWork, and now at WW International (formerly Weight Watchers). What are some challenges you faced as you were scaling a data culture to hundreds of people?
The first challenge is onboarding — dedicating time to get new users up and running.
Once people start digging in, the challenge becomes managing their requests. We get loads of requests for additional dimensions and measures: tweak this, rename that, and so on. We started having sessions on Friday afternoon to decide which requests to accept or reject.
Sometimes we’d have to be strict and say no. Or we’d recommend a workaround that might be challenging to execute.
What are some of the most important things that a data admin should be looking for when delivering a great data experience for their end-users?
A team might have a lot of existing dashboards that are informational but don’t drive decisions.
I would sit down with each team and ask them what they wanted to have control over or what decision they wanted a dashboard to help inform. I’d then work backward from there.
Typically that process resulted in a small, discrete set of dimensions, measures, and models behind the dashboards that enabled them to make data-driven decisions.
So it's really about focusing the discussion toward actionable decisions?
Yes, especially if you inherit a legacy reporting system. We had one last year that had around 2000 reports, many of which were one-offs. Others could be confusing; it was hard to figure out how often they were run or if they were still valuable.
Switching from a legacy system to Looker often provides a fresh slate to determine what truly matters.
Once we had answers to how a team works, what levers they had, and what their KPIs were, we knew what to focus on — how we could help a team deliver and improve their KPIs.
What aspects of Looker are particularly suited to handling scale for developers? And what did you see that was missing?
When you're developing LookML, you want to adopt software best practices, such as keeping your code DRY (Don’t Repeat Yourself), using inheritance, having baselines that you extend, and so on.
One of the drivers for starting on our own LookML linter was to establish some basic coding standards to tame the codebase and let the structure shine through.
Enforcing naming conventions for where things go, where to find them, and what they're called lets people who may be new to the organization, or to this repository, dig in.
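(For illustration, a naming-convention check like the one Carl describes could be sketched in Python along these lines; the file-name convention and helper below are hypothetical, not the actual lookml-tools implementation.)

```python
import re
from pathlib import Path

# Hypothetical convention: view files are lowercase_snake_case and end in .view.lkml.
VIEW_FILE_PATTERN = re.compile(r"^[a-z][a-z0-9_]*\.view\.lkml$")

def check_view_file_names(repo_root: str) -> list[str]:
    """Return a message for every view file that violates the naming convention."""
    issues = []
    for path in Path(repo_root).rglob("*.view.lkml"):
        if not VIEW_FILE_PATTERN.match(path.name):
            issues.append(f"{path}: file name does not follow the naming convention")
    return issues

if __name__ == "__main__":
    for issue in check_view_file_names("."):
        print(issue)
```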
Did you create the toolkit to enforce that methodology of DRY code standards for developers?
We have a set of LookML developers who are used to software or BI engineering. We also have analysts who know SQL very well but aren’t software developers.
Creating a linter gave them all guidance and best practices to use. Getting everyone to follow the same coding standards provides immense value.
We basically ate our own dog food first to make it easier for an analyst to write clean LookML code.
Did these rules come from your experience in software development? Or did some of these rules come from an awareness of what was cluttering up Looker instances for those in a different role?
It's a mix. Some, like “describe all fields surfaced for users,” are important regardless of what you’re doing. It's critical for users to know exactly what something is; that knowledge helps avoid confusion.
Rules like the drill-down ones are meant to enhance Looker as a data-discovery tool. There's always going to be one or more dimensions you should be able to drill down into for any given measure. We want to make sure there are obvious places to go once a user opens up a field.
Others, like the naming rules, are more about the developer experience than the end-user experience.
And some rules smooth the experience for both developers and end-users. “Don’t SHOUT!” adds consistency to the UI for both the end-user and the developer.
Last but not least, “Hello Members” is important because we changed our internal language around how we talk about members and subscriptions and so on. We want to enforce that rule because it helps ground Looker as a source of truth.
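(To make a couple of those rules concrete, here is a minimal Python sketch of what a “describe all fields” and a “Don’t SHOUT!” check could look like, applied to a view that a parser has already turned into a dict; the structure and helper are illustrative, not the real lookml-tools code.)

```python
def lint_view(view: dict) -> list[str]:
    """Apply two illustrative rules to a parsed LookML view.

    Assumes the parser produced something like:
    {"name": "orders",
     "dimensions": [{"name": "status", "label": "STATUS", "description": "..."}],
     "measures": [{"name": "count"}]}
    """
    issues = []
    for field in view.get("dimensions", []) + view.get("measures", []):
        # Describe all fields surfaced for users.
        if not field.get("description") and not field.get("hidden"):
            issues.append(f"{view['name']}.{field['name']}: missing description")
        # Don't SHOUT! Labels should not be written in all caps.
        label = field.get("label", "")
        if label and label.isupper():
            issues.append(f"{view['name']}.{field['name']}: label '{label}' is all caps")
    return issues
```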
What do you see as the particular value of your lookml-tools being open-source (rather than native features within Looker)?
Great question. Some of these tools — and not just lookml-tools — may be open-source simply because Looker hasn’t built them natively. For instance, an API endpoint that I could post some LookML to and receive a “yes, this is valid” or “no, it isn't” response would be helpful.
Open-source tools signal the start of a thriving community where people recognize a need and share their solutions, which is great.
With regard to lookml-tools, what contributions from the community would be most helpful to you?
I would be happy if people used them!
No one's going to have exactly the same set of linter rules that I have. They’re going to add their own classes of rules, which has a multiplier effect with other members of the community contributing different types of rules. I can imagine that there are other major types of rules that people are looking for in a linter.
LookML has so much conditional logic. I worry that there's some edge case I haven't thought about and the parser is going to fall over. Having a richer test suite, or a corpus of valid open-source LookML files that covers the breadth of LookML functionality, would be handy for everyone.
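(A corpus-driven safety net like the one Carl describes might look roughly like this with pytest; the corpus directory is hypothetical, and the open-source lkml package is used as the parser purely for illustration.)

```python
from pathlib import Path

import lkml
import pytest

# Hypothetical directory of known-valid, openly licensed LookML files.
CORPUS = sorted(Path("tests/corpus").rglob("*.lkml"))

@pytest.mark.parametrize("lookml_file", CORPUS, ids=lambda p: p.name)
def test_parser_handles_corpus_file(lookml_file):
    """The parser should not fall over on any valid LookML file in the corpus."""
    parsed = lkml.load(lookml_file.read_text())
    assert isinstance(parsed, dict)
```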
I also think that while there’s certain value in native support for a feature, there’s tangible value in open-sourcing a project because it communicates that you're not necessarily locked into one way of doing things.
This might be a good time to mention what happened yesterday.
Josh Temple, a Senior Data Engineer at Milk Bar, reached out to me with a LookML parser he’d written in Python and asked if I was interested in trying it out. He’d seen my post about lookml-tools.
So I spent my morning seeing whether I could incorporate it as a parser into my Python project and get my test suite to pass. After a couple of hours, I had about 90% of my test suite passing.
It’s great that someone else created a Python parser and opened it up to the community. And it helps me out because I can manage the dependencies and make it easy to install and run my code.
I think it speaks to the community that through Discourse or some other channel, a person found out about this and essentially offered a contribution to this project to make it even better — not just for me, but for everyone using this tool now.
(Update: the latest lookml-tools now uses Josh’s Python parser exclusively.)
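(For readers curious what that integration looks like, here is a minimal example of parsing a LookML view into plain Python structures; it assumes the parser in question is the open-source lkml package, and the view content is made up.)

```python
import lkml

LOOKML = """
view: orders {
  sql_table_name: analytics.orders ;;

  dimension: status {
    type: string
    sql: ${TABLE}.status ;;
  }

  measure: count {
    type: count
  }
}
"""

# lkml.load turns LookML text into nested dicts and lists that
# downstream tools (linters, updaters, graphers) can walk.
parsed = lkml.load(LOOKML)
for view in parsed["views"]:
    print(view["name"], [d["name"] for d in view.get("dimensions", [])])
```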
This is an edited and condensed version of this interview.