What We Do
- Leveraging our expertise, we build modern Machine Learning systems for demand planning and budget forecasting.
- Developing scalable data infrastructure tailored to each client, we enhance high-level decision-making.
- Offering comprehensive Data Engineering and custom AI solutions, we optimize cloud-based systems.
- Using Generative AI, we help e-commerce platforms and retailers create higher-quality ads, faster.
- Building deep learning models, we enhance visual recognition and automation for various industries, improving product categorization, quality control, and information retrieval.
- Developing recommendation models, we personalize user experiences in e-commerce, streaming, and digital platforms, driving engagement and conversions.
Our Partnerships
- Amazon Web Services
- Astronomer
- Databricks
Our Values
- We are Data Nerds
- We are Open Team Players
- We Take Ownership
- We Have a Positive Mindset
Responsibilities
- Orchestration & Integration: Build and manage scalable data pipelines using Apache Airflow, dbt, and Airbyte, ensuring seamless data ingestion and movement.
- Product Development: Work hand in hand with Mixilo's product team to address real issues from clients and internal users, design technical solutions, and help prioritize the roadmap.
- Infrastructure as Code (IaC): Use Terraform to provision and manage cloud resources on AWS, maintaining a secure and cost-effective infrastructure.
- Kubernetes & GitOps: Manage containerized applications and services on Kubernetes (EKS), implementing continuous delivery practices to keep Mixilo running smoothly.
- Enhance DX (Developer Experience): Abstract away complex DAG and dbt logic to reduce manual work for the team and optimize our time to market.
- Data Reliability: Implement rigorous testing frameworks, specifically leveraging dbt tests, to ensure data quality and catch errors before they impact client recommendations.
- CI/CD for Data: Maintain and improve our CI/CD pipelines to automate testing, deployment, and infrastructure changes.
Required Skills
- Senior Data Engineering Foundations: Strong experience in Python and a deep mastery of SQL and Postgres.
- Modern Data Stack: Hands-on experience with Airflow, dbt, and Airbyte.
- Analytical Data Systems: Experience building analytical data systems on top of Data Lakes (e.g., AWS S3, Athena, EMR, Glue, Iceberg/Delta).
- Cloud Mastery: Solid understanding of AWS services (S3, EC2, RDS, IAM, etc.).
- Operations Mindset: A strong understanding of GitOps and CI/CD principles, and a passion for automation.
- Teamwork: Great capacity for collaborative, async, and written communication. You take ownership and follow through on your commitments.
Nice-to-Have Skills
- Observability: Experience setting up monitoring and alerting systems to ensure the health of data pipelines and infrastructure.
- Code Hygiene: A sharp sense of code hygiene, including code review, documentation, testing, and CI/CD (Continuous Integration/Continuous Delivery).
- Stream Processing Knowledge: Experience with stream processing tools like Kafka Streams, Kinesis, or Spark.
- Python's Scientific Stack: Proficiency in Python's Scientific Stack, including numpy, pandas, jupyter, matplotlib, scikit-learn, and related tools.
- Infrastructure & Containers: Proficiency with Terraform, Docker, and Kubernetes.
- English Proficiency: Solid command of the English language for writing technical documents, such as Design Documents.
Perks
- Remote-first culture: work from anywhere!
- AWS, dbt, Google Cloud, Azure & Databricks certifications fully covered
- In-company English lessons
- Birthday off + an extra vacation week (Mutt Week!)
- Referral bonuses: help us grow the team & get rewarded!
- Maslow: Monthly credits to spend in our benefits marketplace.
- Annual Mutters' Trip: an unforgettable getaway with the team!