Responsibilities
Design, build, and maintain scalable data infrastructure to support analytics and reporting across the organization.
Develop and operate ETL pipelines to ingest, transform, and deliver large-scale datasets (a minimal orchestration sketch follows this list).
Work with distributed data processing frameworks such as Spark and Hive, as well as MPP query engines such as Presto.
Use SQL and data modeling techniques to structure and optimize datasets for analytics use cases.
Process and analyze large volumes of structured and semi-structured data using tools such as Spark and Presto.
Write production-quality code using Python, Java, Scala, or Go.
Ensure data reliability and availability, operating and monitoring hundreds of ETL pipelines with strict SLAs.
Investigate and resolve complex data issues, including root-cause analysis of pipeline failures or data inconsistencies.
Partner closely with Data Analysts and cross-functional stakeholders to provide reliable datasets and guide them in using data effectively.
Troubleshoot data issues in dashboarding tools (e.g., Tableau, Power BI, MicroStrategy) and propose solutions.
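To give a concrete flavor of the pipeline and SLA work described above, here is a minimal sketch of a daily ETL DAG, assuming Airflow 2.x. The dag_id, the extract/transform/load callables, the schedule, and the SLA window are hypothetical placeholders, not a prescribed design.

```python
# A minimal sketch of a daily ETL DAG with retries and an SLA, assuming
# Airflow 2.x. All names below (dag_id, tasks, callables) are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Hypothetical: pull a day's partition from a source system.
    ...

def transform(**context):
    # Hypothetical: clean and reshape the extracted data.
    ...

def load(**context):
    # Hypothetical: write the result to the analytics warehouse.
    ...


default_args = {
    "owner": "data-eng",
    "retries": 2,                        # retry transient failures
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="example_daily_etl",          # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    default_args=default_args,
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(
        task_id="transform",
        python_callable=transform,
        sla=timedelta(hours=2),          # flag runs that miss the window
    )
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```

An SLA miss on the transform task surfaces in Airflow's SLA-miss reporting, which is one common way teams keep large fleets of pipelines inside their delivery windows.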
Qualifications and Job Requirements
5+ years of experience in Data Engineering, building and maintaining data infrastructure and pipelines.
Strong expertise in SQL, including joins, aggregations, unions, and window functions (a window-function example follows this list).
Hands-on experience with data modeling and schema design for analytical systems.
Experience building ETL pipelines using Airflow or similar orchestration tools.
Experience with Big Data ecosystems, including Hadoop, Hive, Spark, or related technologies.
Programming experience in Python, Java, Scala, or Go.
Familiarity with UNIX/Linux environments and shell scripting.
Understanding of software engineering best practices, including testing, monitoring, and documentation.
Strong collaboration and communication skills when working with analysts and cross-functional stakeholders.
Ability to troubleshoot and resolve data issues across pipelines and BI tools.
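As a hedged illustration of the SQL skills listed above, the sketch below uses a window function to pick each user's most recent event. It assumes PySpark and an already-registered events table; the table and column names are made up for the example.

```python
# A window-function example run through Spark SQL. The `events` table and
# its columns (user_id, event_ts) are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("window-fn-example").getOrCreate()

latest_per_user = spark.sql("""
    SELECT user_id,
           event_ts,
           ROW_NUMBER() OVER (
               PARTITION BY user_id
               ORDER BY event_ts DESC
           ) AS rn
    FROM events
""").where("rn = 1")        # keep only each user's most recent event

latest_per_user.show()
```

The same ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...) pattern runs unchanged on Presto/Trino and most analytical warehouses, since it is standard SQL.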
Nice to Have
Degree in Computer Science or a related field.
Experience working in fast-paced, high-growth technology companies.
Familiarity with real-time data ingestion frameworks such as Kafka or Flume (a minimal consumer sketch follows this list).
Experience supporting data science or advanced analytics teams.
Knowledge of industry best practices for large-scale ETL and data platform architecture.
Strong interest in data science and emerging data technologies.
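For the real-time ingestion item above, here is a minimal consumer sketch, assuming the kafka-python client and JSON-encoded messages; the topic name, consumer group, and broker address are placeholders rather than a real deployment.

```python
# A minimal Kafka consumer sketch using kafka-python. Topic, group id,
# and broker address are hypothetical placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                              # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # hypothetical broker
    group_id="etl-ingest",                 # hypothetical consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    record = message.value                 # a dict after deserialization
    # Hypothetical: stage the record for a downstream batch or stream job.
    print(record)
```

With kafka-python's default auto-commit, this gives at-least-once delivery, so downstream ETL typically deduplicates on a record key.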
What We Offer
100% Remote Work: Enjoy the freedom to work from the location that helps you thrive. All it takes is a laptop and a reliable internet connection.
Highly Competitive USD Pay: Earn excellent, market-leading compensation in USD that goes beyond typical market offerings.
Paid Time Off: We value your well-being. Our paid time off policies ensure you have the chance to unwind and recharge when needed.
Work with Autonomy: Enjoy the freedom to manage your time as long as the work gets done. Focus on results, not the clock.
Work with Top American Companies: Grow your expertise working on innovative, high-impact projects with industry-leading U.S. companies.
Why You’ll Like Working Here
A Culture That Values You: We prioritize well-being and work-life balance, offering engagement activities and fostering dynamic teams to ensure you thrive both personally and professionally.
Diverse, Global Network: Connect with over 600 professionals in 25+ countries, expand your network, and collaborate with a multicultural team from Latin America.
Team Up with Skilled Professionals: Join forces with senior talent. All of our team members are seasoned experts, ensuring you're working with the best in your field.