Data Engineer 1

TrulyRemote Verified

Technical Requirements

Python, Apache Spark, SQL, AWS, Databricks, Airflow

About Us

People Data Labs (PDL) is the provider of people and company data. We do the heavy lifting of data collection and standardization so our customers can focus on building and scaling innovative, compliant data solutions. Our sole focus is on building the best data available by integrating thousands of compliantly sourced datasets into a single, developer-friendly source of truth. Leading companies across the world use PDL’s workforce data to enrich recruiting platforms, power AI models, create custom audiences, and more.

We are looking for someone early in their engineering career who is excited to learn what it takes to support a data-as-a-service (DaaS) business. Our customers are trying to solve complex problems, and we can only help them achieve their goals by working as a team. Our Data Engineering Team is the secret sauce behind all that we do, and we look for the best of the best. This would be an exceptionally meaningful team to learn from at the start of your career.

If you are looking to be part of a team discovering the next frontier of DaaS with a high level of mentorship and opportunity for direct contributions, this might be the role for you. We like our engineers to be thoughtful, quirky, and willing to learn new things.

What You Get to Do

  • Build infrastructure for ingesting, transforming, and loading an exponentially increasing volume of data from a variety of sources using Spark, SQL, AWS, and Databricks

  • Build an organic entity resolution framework capable of correctly merging hundreds of billions of individual entities into a number of clean, consumable datasets

  • Develop CI/CD pipelines and anomaly detection systems that continuously improve the quality of the data we push into production

  • Dream up solutions to largely undefined data engineering and data science problems
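To make the entity resolution work above concrete: at its core, record linkage means clustering records that share identifying attributes. This is purely an illustrative sketch (not PDL's actual framework); the function name `resolve_entities` and the identifier keys are hypothetical, and a production system would run this logic at scale in Spark rather than in-memory Python.

```python
from collections import defaultdict

def resolve_entities(records, keys=("email", "linkedin")):
    """Toy entity-resolution pass: records sharing any identifier
    value for the given keys are merged into one cluster (union-find).
    Hypothetical example; not a production implementation."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Link any two records that share an identifier value.
    seen = {}
    for idx, rec in enumerate(records):
        for key in keys:
            val = rec.get(key)
            if not val:
                continue
            if (key, val) in seen:
                union(idx, seen[(key, val)])
            else:
                seen[(key, val)] = idx

    clusters = defaultdict(list)
    for idx in range(len(records)):
        clusters[find(idx)].append(records[idx])
    return list(clusters.values())

records = [
    {"name": "Ada L.", "email": "ada@example.com"},
    {"name": "Ada Lovelace", "email": "ada@example.com", "linkedin": "ada"},
    {"name": "A. Lovelace", "linkedin": "ada"},
    {"name": "Charles Babbage", "email": "cb@example.com"},
]
clusters = resolve_entities(records)
# The first three records chain together via shared identifiers,
# leaving two clusters.
```

Note how the second record bridges the first and third: merges are transitive, which is exactly what makes entity resolution at billions-of-records scale a hard distributed-systems problem.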

The Technical Chops You’ll Need

  • 1-2+ years of industry experience with clear examples of strategic technical problem-solving and implementation

  • Strong software development fundamentals

  • We do not expect experience in all of the following areas, but the more tenured members of our team tend to have:

    • Experience with Python

    • Expertise with Apache Spark (Java, Scala, and/or Python-based)

    • Experience with SQL

    • Experience building scalable data processing systems (e.g., cleaning, transformation) from the ground up

    • Experience with developer-oriented data pipeline and workflow orchestration tools (e.g., Airflow (preferred), dbt, Dagster, or similar)

    • Knowledge of modern data design and storage patterns (e.g., incremental updating, partitioning and segmentation, rebuilds and backfills)

    • Experience working in Databricks (including Delta Live Tables, data lakehouse patterns, etc.)

    • Experience with cloud computing services (AWS (preferred), GCP, Azure, or similar)

    • Experience with data warehousing (e.g., Databricks, Snowflake, Redshift, BigQuery, or similar)

    • Understanding of modern data storage formats and tools (e.g., Parquet, ORC, Avro, Delta Lake)
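As a flavor of the storage patterns listed above (partitioning, incremental updating), here is a minimal sketch in plain Python. The bucket path and helper names (`partition_path`, `upsert_partition`) are hypothetical; in practice a Delta Lake MERGE or Spark job would do this work, but the underlying idea is the same.

```python
from datetime import date

def partition_path(base, event_date):
    """Hive-style date partition path, e.g. base/year=2024/month=05/day=01.
    Hypothetical helper for illustration only."""
    return f"{base}/year={event_date:%Y}/month={event_date:%m}/day={event_date:%d}"

def upsert_partition(existing, batch, key="id"):
    """Incremental update: merge a new batch into an existing partition,
    replacing rows that share a primary key (a toy stand-in for a
    Delta-style MERGE)."""
    merged = {row[key]: row for row in existing}
    for row in batch:
        merged[row[key]] = row  # new data wins
    return sorted(merged.values(), key=lambda r: r[key])

path = partition_path("s3://bucket/people", date(2024, 5, 1))
existing = [{"id": 1, "title": "Engineer"}, {"id": 2, "title": "Analyst"}]
batch = [{"id": 2, "title": "Senior Analyst"}, {"id": 3, "title": "Manager"}]
rows = upsert_partition(existing, batch)
# Row 2 is updated in place and row 3 is appended; row 1 is untouched.
```

Partitioning by date keeps rebuilds and backfills cheap (rewrite only the affected partitions), which is why it appears alongside incremental updating in the list above.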

People Thrive Here Who Can

  • Balance high ownership and autonomy with a strong ability to collaborate

  • Work effectively in a remote setting (proactive about managing blockers, reaching out and asking questions, and participating in team activities)

  • Demonstrate strong written communication skills on Slack/Chat and in documents

  • Exhibit experience in writing data design docs (pipeline design, dataflow, schema design)

  • Scope and break down projects, and communicate progress and blockers effectively to your manager, team, and stakeholders

Some Nice To Haves

  • Degree in a quantitative discipline such as computer science, mathematics, statistics, or engineering

  • Experience working with entity data (entity resolution / record linkage)

  • Experience working with data acquisition / data integration

  • Expertise with Python and the Python data stack (e.g., numpy, pandas)

  • Experience with streaming platforms (e.g., Kafka)

  • Experience evaluating data quality and maintaining consistently high data standards across new feature releases (e.g., consistency, accuracy, validity, completeness)