Do you want to be at the forefront of engineering big data solutions that take transportation models to the next generation? Do you have solid analytical thinking, metrics-driven decision-making, and a desire to build solutions that meet a growing worldwide need? We are looking for top-notch Data Engineers to be part of our world-class Transportation Business Intelligence team. We are building real-time analytical platforms using big data tools such as Hadoop and Spark, along with AWS technologies such as EMR, SNS, SQS, Lambda, Kinesis Firehose, and DynamoDB Streams.
The ideal candidate relishes working with large volumes of data, enjoys the challenge of highly complex technical contexts, and, above all else, is passionate about data and analytics. They are an expert in data modeling, ETL design, and business intelligence tools, and they partner passionately with the business to identify strategic opportunities where improvements in data infrastructure create an outsized business impact. They are a self-starter who is comfortable with ambiguity, able to think big while paying careful attention to detail, and happy working on a fast-paced, global team. It's a big ask, and we're excited to talk to anyone who is up to the challenge!
Minimum Requirements:
- 4-7 years of experience performing quantitative analysis, preferably for an Internet company with large, complex data sources.
- Hands-on experience with big data technologies and frameworks such as Hive, Spark, Hadoop, SQL on big data, and Redshift.
- Experience with near-real-time analytics.
- Experience with scripting languages (e.g., Python, Perl).
- Experience with ETL, data modeling, and working with large-scale datasets; extremely proficient in writing performant SQL against large data volumes.
- Ability to manage competing priorities simultaneously and drive projects to completion.
- Bachelor's degree or higher in a quantitative/technical field (e.g., Computer Science, Statistics, Engineering).
- 3+ years of data engineering experience.
- Experience with SQL, data modeling, data warehousing, and building ETL pipelines.
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Kinesis Firehose, Lambda, and IAM roles and permissions.
- Experience with non-relational databases and data stores (object storage, document or key-value stores, graph databases, column-family databases).

Preferred Qualifications:
- Experience with large-scale data processing, data structure optimization, and scalability of algorithms.