About Arbor
Arbor is building an intelligent electricity marketplace to power an abundant electric future. As AI data centers drive a surge in electricity demand, millions of homes and businesses remain stuck behind 20th-century monopoly interfaces, overpaying by billions while cheap renewable energy goes to waste. Our AI-powered platform aggregates users to unlock wholesale pricing, automatically switches between the best rates, and optimizes usage to align with prices in real time, delivering the same market advantages that were previously exclusive to Fortune 500 companies. We've raised over $20 million from leading investors and have saved over $12 million for more than 100,000 homes and businesses.
The Opportunity
We're looking for a Data / Analytics Engineer to own the data infrastructure that powers Arbor's intelligence layer. You'll be the connective tissue between our production systems and the insights that drive pricing decisions, marketplace performance, and customer outcomes. Our data stack is modern, consolidated, and still maturing, which means you'll have real ownership over how it evolves, not just tickets to close.
This is a high-impact, high-ownership role on a lean team. We move fast, and we expect you to bring the same AI-first development mindset we apply across the entire engineering organization.
What You'll Do
Build and maintain our data pipeline infrastructure, from ingestion through Fivetran and custom pipelines that move data out of our GCP production systems into Snowflake. You'll own our dbt transformation layer end to end: modeling energy market data, customer lifecycle events, marketplace results, and utility rate feeds into clean, reliable, and well-documented data assets.
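For a concrete flavor of that work, here is a minimal, purely illustrative sketch of one such custom pipeline step, assuming a daily CSV export landing in a GCS bucket and a raw Snowflake table for dbt to model downstream. Every name in it (the bucket, the table, the environment variables) is hypothetical, not a description of Arbor's actual stack.

```python
# Illustrative only: all names (bucket, table, env vars) are hypothetical.
# Moves a daily utility-rate export from GCS into a raw Snowflake table.
import os

import snowflake.connector
from google.cloud import storage


def load_rate_export(date_str: str) -> None:
    # Download the day's export from the (hypothetical) production bucket.
    bucket = storage.Client().bucket("arbor-prod-exports")
    blob = bucket.blob(f"utility_rates/{date_str}.csv")
    local_path = f"/tmp/utility_rates_{date_str}.csv"
    blob.download_to_filename(local_path)

    # Stage the file and copy it into RAW.ENERGY.UTILITY_RATES for dbt.
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        database="RAW",
        schema="ENERGY",
    )
    try:
        cur = conn.cursor()
        # PUT uploads to the table's built-in stage; COPY then loads it.
        cur.execute(f"PUT file://{local_path} @%UTILITY_RATES")
        cur.execute(
            "COPY INTO UTILITY_RATES FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
        )
    finally:
        conn.close()


if __name__ == "__main__":
    load_rate_export("2024-01-01")
```

In a real deployment this would run under an orchestrator with retries and alerting rather than as a standalone script.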
You'll partner closely with engineering, operations, and leadership to deliver analytics that directly inform business decisions: rate monitoring, supplier pricing trends, customer switching patterns, and marketplace performance. You'll work in Hex to build and maintain dashboards that surface actionable intelligence for non-technical stakeholders, and you'll help define data contracts and schema standards that make our Snowflake environment trustworthy as we scale.
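To make "data contract" concrete: it's an agreed-upon shape for a table that downstream dashboards can rely on. Here is a minimal sketch of a contract check, with the column names invented purely for the example:

```python
# Illustrative sketch of a data-contract check; all names are hypothetical.
# Given the live schema of a modeled table (e.g. as fetched from Snowflake's
# information_schema), report any drift from the agreed contract.

EXPECTED = {
    "ACCOUNT_ID": "TEXT",
    "RATE_CENTS_PER_KWH": "NUMBER",
    "EFFECTIVE_DATE": "DATE",
}


def contract_violations(actual: dict[str, str]) -> list[str]:
    """Return a list of violations; an empty list means the table complies."""
    violations = []
    for column, expected_type in EXPECTED.items():
        if column not in actual:
            violations.append(f"missing column: {column}")
        elif actual[column] != expected_type:
            violations.append(
                f"{column}: got {actual[column]}, expected {expected_type}"
            )
    return violations


# Example: an upstream change renamed the rate column and dropped the date.
print(contract_violations({"ACCOUNT_ID": "TEXT", "RATE_DOLLARS": "NUMBER"}))
# -> ['missing column: RATE_CENTS_PER_KWH', 'missing column: EFFECTIVE_DATE']
```

In practice a check like this usually lives in dbt tests or CI rather than a standalone script, but the idea is the same: schema drift fails loudly before it reaches a dashboard.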
You'll also play a meaningful role in shaping how we use AI in our data workflows, whether that's automating data quality monitoring, accelerating development with AI-assisted SQL and Python, or surfacing anomalies in complex regulatory data feeds.
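As one small example of what automated anomaly surfacing can look like, here is a sketch of a trailing z-score check over a rate feed; the feed, window, and threshold are all hypothetical choices, not a prescribed approach.

```python
# Illustrative sketch: flag readings more than `z` standard deviations
# from the trailing mean (a simple z-score anomaly check).
from statistics import mean, stdev


def flag_anomalies(rates: list[float], window: int = 30, z: float = 3.0) -> list[int]:
    """Return indexes of readings that deviate sharply from the trailing window."""
    flagged = []
    for i in range(window, len(rates)):
        trailing = rates[i - window : i]
        mu, sigma = mean(trailing), stdev(trailing)
        if sigma > 0 and abs(rates[i] - mu) / sigma > z:
            flagged.append(i)
    return flagged


# Example: a steady feed with one bad reading (say, a mis-parsed decimal point).
feed = [12.1, 12.3, 12.0, 12.2] * 10 + [121.0]
print(flag_anomalies(feed))  # -> [40]
```

Real regulatory feeds are messier than this (seasonality, tariff changes, gaps), which is exactly where AI-assisted tooling earns its keep.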
What You Bring
3–6+ years of experience in a data engineering or analytics engineering role, ideally at a high-growth startup or in a domain involving complex, real-time data (energy, fintech, marketplace, or similar).
Strong dbt fundamentals: you know how to design models that are maintainable, not just functional. You treat testing, documentation, and downstream consumers as part of the job, not afterthoughts. Solid SQL is a given; Python for pipeline development or data quality tooling is a plus.
Hands-on experience with Snowflake, including schema design, query performance, and understanding the cost/performance tradeoffs of how data is structured and accessed. Comfort with GCP data services (BigQuery, Cloud Storage, Pub/Sub) is a plus given our infrastructure footprint.
Experience building dashboards for business stakeholders in tools like Hex, Looker, or similar. You know the difference between a dashboard that gets used and one that doesn't.
Experience in energy markets or other regulated industries is not required, but genuine curiosity about how electricity pricing, competitive markets, and the grid work will make this role more interesting to you.
An AI-native approach to your own productivity: you're actively using AI tools to accelerate development and aren't waiting for someone to tell you to start.
Compensation & Work Setup
Competitive salary + meaningful equity + benefits. We're flexible on location but value regular in-person collaboration with the team.