The Speed Advantage: Why Velocity Outpaces Over-engineering with Ryan Delgado, Ramp | Crunching Data

Ramp’s Ryan Delgado shares why simplicity wins in data platforms, how AI accelerates engineering, and what’s next for BI and warehouses.
Sarah Lubeck

Ryan Delgado has a counterintuitive rule for building data platforms: wait as long as possible.

On this episode of Crunching Data, Ryan, who leads the data platform team at Ramp, shares why velocity trumps perfection, how Ramp became one of the country's biggest users of AI coding tools, and why the next five years will completely reshape the data warehouse landscape.

If you're building or scaling a data platform in 2025, this conversation is required listening.

Here are some of the areas we dive into:

Don't build the platform before you understand the problem

Ryan learned this lesson the hard way. Early at Ramp, he focused too much on building robust platforms before fully understanding what problems needed solving.

"A mistake that I made actually early on at Ramp is I put too much emphasis on the platforms and building that first before thinking about the problems. And what happened is that I built platforms that were difficult to use or they weren't useful."

His advice? Start with the most naive solution possible. Use cron jobs before Airflow. Stick with Postgres before jumping to Snowflake. The platform should emerge naturally from solving real problems, not the other way around.
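As a concrete (and hypothetical) illustration of that naive-first approach: a single Python script scheduled with cron, writing to plain Postgres. The script, table names, and connection string below are invented for the sketch, not Ramp's actual setup.

```python
# daily_metrics.py -- a hypothetical naive pipeline: no Airflow, no Snowflake.
# Scheduled with a single crontab entry, e.g.:
#   0 6 * * * /usr/bin/python3 /opt/etl/daily_metrics.py
import datetime

import psycopg2  # plain Postgres before reaching for a cloud warehouse


def main():
    conn = psycopg2.connect("dbname=app")  # assumed local connection string
    with conn, conn.cursor() as cur:
        # Roll up yesterday's signups into a summary table. No retries, no
        # backfills, no orchestration -- add those only when a real problem
        # demands them.
        cur.execute(
            """
            INSERT INTO daily_signups (day, signups)
            SELECT created_at::date, count(*)
            FROM users
            WHERE created_at::date = %s
            GROUP BY 1
            ON CONFLICT (day) DO UPDATE SET signups = EXCLUDED.signups
            """,
            (datetime.date.today() - datetime.timedelta(days=1),),
        )
    conn.close()


if __name__ == "__main__":
    main()
```

When a script like this starts missing runs, needing backfills, or growing dependencies, that's the signal a real orchestrator has earned its place.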

Hire for slope, not resume

When Ryan evaluates candidates, he's not looking for well-rounded engineers. He's looking for spikes: specific strengths that match urgent needs.

"We try to hire for potential. Where could they be in let's say one or two or three years down the line rather than where are they currently?"

At Ramp, behavioral interviews aren't run by HR. They're conducted by cross-functional stakeholders, usually product managers or engineers from other teams. It's a vibe check for collaboration and hustle, two qualities that matter more than pedigree at a fast-moving startup.

AI helps Ramp maintain velocity at scale

As Ramp grew to roughly 300 engineers, they needed a force multiplier. They found it in AI coding tools and went all-in. Ramp is now one of the biggest users of Cursor in the country, with LLMs generating at least half of the code written daily across the company.

"95% of [my code] is written by an LLM. More often than not, I'm writing a prompt for Claude Code on what the problem is, how to evaluate the problem, other details it may need."

But Ramp isn't just consuming AI tools; they're building them. A small team dedicated to AI developer experience experiments with existing tools and builds proprietary plugins to accelerate development. They're even pairing Claude Code with Gemini to leverage the strengths of both.

Ramp's tech stack reflects their speed-first philosophy

Ryan's team uses a pragmatic mix of tools:

  • Snowflake as the data warehouse bedrock
  • Airflow for orchestration
  • dbt for transformation
  • Matia for ETL from third-party sources and internal databases. More about Ramp's Matia use case here.
  • ClickHouse for online analytics (and surprisingly, as a vector database)
  • Looker for BI, with Hex for more exploratory analytics

Each tool was chosen to solve a specific problem, not to impress with capabilities. And when they needed faster API responses, they didn't over-optimize Postgres; they moved to ClickHouse.
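To illustrate that last move, here is roughly what a low-latency aggregate read served from ClickHouse could look like using the official clickhouse-connect driver. The host, credentials, and spend_events table are invented for the example.

```python
# A hypothetical low-latency API read served from ClickHouse rather than a
# heavily tuned Postgres instance. Host, credentials, and the spend_events
# table are illustrative, not Ramp's.
import clickhouse_connect

client = clickhouse_connect.get_client(host="clickhouse.internal", username="api")


def spend_by_merchant(org_id: str, days: int = 30):
    # ClickHouse aggregates large event tables fast enough to sit directly
    # behind an API endpoint; {name:Type} placeholders are bound server-side.
    result = client.query(
        """
        SELECT merchant, sum(amount) AS total
        FROM spend_events
        WHERE org_id = {org_id:String}
          AND event_date >= today() - {days:UInt32}
        GROUP BY merchant
        ORDER BY total DESC
        """,
        parameters={"org_id": org_id, "days": days},
    )
    return result.result_rows
```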

Context beats access every time

What separates a good data team from a great one? According to Ryan, it's not the platform; it's proximity to problems.

"I think what separates a data team from a great data team is closeness to the problems and the stakeholders and the partnership that you use in order to arrive at a solution for those problems."

His team runs a help desk in Slack, and Ryan himself jumps in to answer questions when he's on call. That partnership attitude keeps the team aligned with real needs, not theoretical ones.

The next five years will disrupt BI and data warehouses

Ryan doesn't hold back on his "controversial" opinions. He sees two major disruptions coming:

1. BI tools are ripe for reinvention

Text-to-SQL powered by LLMs changes the game for non-technical stakeholders. The question is whether incumbents like Looker can adapt fast enough.

"With the Claude desktop app, if I have a CSV file I want to analyze locally, I can just type my questions in natural language and it can automatically generate reports for me that answer the question and otherwise visualize the data."

2. Open table formats will reshape the data warehouse

Ryan is bullish on Iceberg and believes it could unlock a new category of data warehouse: one where compute lives on local workstations instead of in the cloud.

"If you could build ACLs on top of these Iceberg tables, you basically have a working data warehouse, but the compute doesn't live in the cloud, lives on my local workstation. That opens up an opportunity for startups to use the data warehouse very quickly, but also very cheaply."


That's appealing for consumers, and potentially threatening for incumbents like Snowflake and Databricks, who raced to acquire companies like Tabular for exactly this reason.
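For a sense of how close this already is, DuckDB's iceberg extension can scan an Iceberg table sitting in object storage entirely from a laptop. A minimal sketch follows, with an invented bucket path and assuming S3 credentials are already configured; the ACL layer Ryan describes is the piece this sketch doesn't provide.

```python
# Querying an Iceberg table straight from a laptop with DuckDB's iceberg
# extension: the data stays in object storage, the compute is local.
# The S3 path is illustrative, and credentials are assumed to be configured.
import duckdb

con = duckdb.connect()
con.execute("INSTALL iceberg; LOAD iceberg;")
con.execute("INSTALL httpfs; LOAD httpfs;")  # enables reading from S3

df = con.execute(
    """
    SELECT merchant, sum(amount) AS total
    FROM iceberg_scan('s3://analytics-lake/warehouse/spend_events')
    GROUP BY merchant
    ORDER BY total DESC
    LIMIT 10
    """
).fetchdf()
print(df)
```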

Watch the podcast here

Watch Other Episodes of Crunching Data