Staff Data Engineer

Company:  RevenueCat
Location: San Francisco
Closing Date: 03/11/2024
Salary: $218,000 - $245,000 Per Annum
Hours: Full Time
Type: Permanent
Job Requirements / Description


Date Posted: 31 Oct, 2022

Work Location: San Francisco, United States

Salary Offered: $218,000 — $245,000 yearly

Job Type: Full Time

Experience Required: 11+ years

Remote Work: Yes

Stock Options: No

Vacancies: 1 available

About us:

RevenueCat makes building, analyzing and growing mobile subscriptions easy. We launched as part of Y Combinator's summer 2018 batch and today are handling more than $1.2B of in-app purchases annually across thousands of apps.

We are a mission driven, remote-first company that is building the standard for mobile subscription infrastructure. Top apps like VSCO, Notion, and ClassDojo count on RevenueCat to power their subscriptions at scale.

Our 50 team members (and growing!) are located all over the world, from San Francisco to Madrid to Taipei. We're a close-knit, product-driven team, and we strive to live our core values: Customer Obsession, Always Be Shipping, Own It, and Balance.

We’re looking for a Staff Data Engineer to join our newly formed data engineering team. As a Staff Engineer, you will lead the design, architecture, and support of our entire data platform, and will play a key role in defining how our systems evolve as we scale.

About you:

  • You have 8+ years of software engineering experience.
  • You have 5+ years of experience working with and building enterprise-scale data platforms.
  • You have excellent command of at least one mainstream programming language and some experience with Python.
  • You have helped define the architecture, data modeling, tooling, and strategy for a large-scale data processing system, data lakes or warehouses.
  • You have used workflow management tools (e.g., Airflow, AWS Glue) and have experience maintaining the infrastructure that supports them.
  • You have hands-on experience building CDC-based (Change Data Capture) ingestion pipelines for highly transactional databases. Experience with Postgres and logical replication is a plus.
  • You have a strong understanding of modern data processing paradigms and tooling, OLTP & OLAP database fundamentals.
  • Experience with dimensional modeling and reporting tools like Looker is a plus, but not required.
  • You have experience evolving batch architectures into streaming/real-time data pipelines.

Responsibilities:

  • Help define a long-term vision for the Data Platform architecture and implement new technologies to help us scale our platform over time.
  • Help the team apply software engineering best practices to our data pipelines (testing, data quality, etc.).
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, using SQL and AWS technologies.
  • Clearly define data ownership & responsibility, audit and compliance framework, and general security of the data lake.
  • Partner with product managers, data scientists, and engineers across teams to solve problems that require data.
  • Drive the evolution of our data platform to support our data processing needs and provide frameworks and services for operating on the data.
  • Analyze, debug and maintain critical data pipelines.
  • Work with our core infrastructure team to create and improve frameworks that allow derived data to be used in production environments.
  • Contribute to standards that improve developer workflows, recommend best practices, and help mentor junior engineers on the team to grow their technical expertise.

In the first month, you'll:

  • Get up to speed on our architecture and learn the problem domain.
  • Understand our current data requirements and where things stand today.
  • Gain understanding of our current data pipelines.

Within the first 3 months, you'll:

  • Work with your team to help design and architect our data platform.
  • Work with product managers, engineers, and data scientists to develop a plan and build consensus on the approach.
  • Analyze, debug and maintain critical data pipelines.

Within the first 6 months, you'll:

  • Develop thorough understanding of our data platform.
  • Know all the major components of our system and be able to debug complex issues.
  • Be able to detect bottlenecks, profile the system, and propose enhancements.
  • Start participating in hiring for the company.

Within the first 12 months, you'll:

  • Thoroughly understand our data processing needs and be able to spec, architect, and build solutions accordingly.
  • Mentor other engineers joining the team.

What we offer:

  • $218,000 to $245,000 USD salary regardless of your location.
  • Competitive equity in a fast-growing, Series B startup backed by top tier investors including Y Combinator.
  • 10 year window to exercise vested equity options.
  • Fully remote work environment that promotes autonomy and flexibility.
  • Suggested 4 to 5 weeks of time off to recharge and focus on mental, physical, and emotional health.
  • $2,000 USD to build your personal workspace.
  • $1,000 USD annual stipend for your continuous learning and growth.

About RevenueCat

A simple API for managing in-app subscriptions

Company Size: 51 - 250 People

Year Founded: 2017

Country: United States

Company Status: Actively Hiring
