Description
Who We Are:
Dedicated to making a difference in law enforcement agencies across the U.S., our mission is to transform policing by elevating officer performance with a prevention-based early intervention system. Driven by data science and powered by machine learning, our offering analyzes officer performance data to identify potentially problematic behavior. In partnership with the University of Chicago, we’ve developed the world’s largest multi-jurisdictional officer performance database, and the only research-driven, evidence-based early intervention system available in policing today.
We’re also the only provider of a fully integrated, cloud-based Software-as-a-Service (SaaS) platform that simplifies essential policing workflows. This platform is designed to be a single-source solution for all operational needs, driving extensive efficiency gains and providing best-in-class advanced analytics and insights.
Benchmark Analytics provides a comprehensive, all-in-one solution that is advancing police force management through state-of-the-art technology and market-leading data and analytics.
Responsibilities
- Designing, developing, and maintaining complex data pipelines, ETL processes, and data integration solutions
- Leading development discussions and design decisions across different languages, tools, and frameworks.
- Playing a lead role in team workshops, refinement sessions, and development paths for data engineering
- Assessing and analyzing current/legacy data processes, identifying inefficiencies, and suggesting improvements.
- Collaborating with data scientists and analysts to understand data requirements and ensure the availability of clean, accurate, and reliable data for ML and reporting.
- Collaborating with team members across the organization to improve product data quality, data pipeline efficiency and data platform performance/monitoring.
- Supporting documentation efforts as necessary to prepare the team and company for growth/scale.
- Playing a supporting role in educating the data engineering team members on best practices, relevant knowledge, and specific skills
Job Qualifications:
- Bachelor’s degree in a STEM field or equivalent (e.g., Computer Science, Engineering)
- Experience in data engineering, data analysis, and data integration:
  - Building data pipelines in ETL platforms (e.g., CloverDX or equivalent)
  - Applying data extraction, transformation, and loading techniques to connect large data sets from a variety of sources.
  - Evaluating legacy systems for opportunities to migrate to modern data processing frameworks.
  - Translating operational SQL scripts and migrating them to an ETL layer.
  - Troubleshooting problems across multiple data systems.
- Experience writing queries, analyzing data, and solving data-centric problems:
  - Performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong technical aptitude and the ability to quickly learn new technical products.
- Has a strong sense of process (ability to understand how steps relate to each other to achieve end results)
- Strong analytic skills related to working with unstructured datasets.
- Capacity to translate business requirements into technical solutions.
- Knowledge of DevOps practices (e.g. CI/CD pipelines, infrastructure as code)
- Able to manage multiple concurrent tasks/projects and meet deadlines.
- Works and communicates well with others:
  - Has empathy for colleagues and customers.
  - Receives and responds positively to feedback.
  - Works independently with reasonable guidance from management, seeking direction in complex or non-routine situations.
  - Communicates effectively across multiple levels (team members, managers) in a fast-paced environment.
- 5+ years of experience architecting, designing, and developing scalable data solutions.
- 5+ years of experience building and maintaining optimized ETL/ELT pipelines (batch and/or streaming) that handle a variety of structured and unstructured sources (CloverDX or equivalent).
- 3+ years of experience with Java, Python, Unix shell scripting, and data-driven job schedulers.
- 2+ years of experience with cloud-based platforms (AWS) and containerization technologies (Docker, Kubernetes).
- Proficiency in various data modeling techniques, such as ER, Hierarchical, Relational, or NoSQL modeling and model governance.
- Excellent design and development experience with SQL and NoSQL databases, including OLTP and OLAP systems.
- Technologies: SQL, Python, Java, AWS products, CloverDX, Postgres, Spark/Hive, GitLab, Git, Docker, Kubernetes, Django
Position does not have direct reports but is expected to assist in guiding and mentoring less experienced staff.
What We Offer
- A competitive salary and benefits package.
- Unlimited paid time off (PTO)
- Ability to work in a fully remote environment (must be based in the U.S. and willing to work in Central Time Zone).
- Medical, dental, and vision plan offerings along with 401(k).
- Employer-paid Short-Term Disability, Long-Term Disability, and Life Insurance.
- Other Voluntary Benefits include additional Life Insurance, Spouse Life Insurance, and Accident Insurance.
- The satisfaction that comes with being part of a solution that has real impact in the world.
- A diverse workforce and inclusive environment that embraces unique contributions and experiences.
- An empowered culture that encourages creativity and professional growth.
- Benchmark Analytics is an Equal Opportunity Employer. We value diversity of all kinds in our effort to create a stellar workforce of committed and passionate team members.
- Unfortunately, we are not able to sponsor employment visas at this time, so we can only accept applications from candidates who are authorized to work in the U.S.
- If interested, please email your resume to