Description
At Forager.ai, we deliver premier workforce data encompassing people, contacts, organizations, jobs, and intent signals. Renowned for providing the most up-to-date and accurate lead generation data on the market, our solutions empower cutting-edge recruiting and sales platforms, AI-driven models, custom audience creation, and much more. With seamless delivery through APIs, data feeds, and CRM integrations, Forager.ai ensures our customers access the data they need, when and how they need it.
Why Join Us Now
There’s never been a better time to join Forager.ai. We’re experiencing rapid growth, driven by the increasing demand for our high-quality data solutions. To keep pace, we’re enhancing the way we deliver value to our customers by developing new features, integrations, and scalable infrastructure. We’re seeking a Senior Full Stack Engineer to help drive the evolution of our platform—building robust web applications, APIs, and data systems that power our products. Be part of an exciting phase of innovation and help launch groundbreaking data products into their next chapter of success.
This role is a global opportunity that requires a 4-hour overlap with US Mountain Time.
What You'll Build
Build and operate the systems that deliver Forager.ai's people and organization data to platform customers at scale — customer-facing apps, large-scale data pipelines, and search infrastructure. These are the surfaces we win competitive bakeoffs on and retain customers through, the foundation of our "Data Quality Championship Belt."
- Real-time enrichment APIs — person/org lookup, contact data, reverse search for waterfall platforms. Match rate, latency, and freshness drive renewals.
- Bulk data feed delivery — maintain the Snowflake service delivering billions of data points daily to Data Feed customers.
- Elasticsearch search infrastructure — indexing, query design, relevance tuning, and cluster scaling for person/company search and filtering APIs.
- ETL pipelines — workers, task queues, and transformations moving data into APIs and feed exports.
- Customer-facing web app and developer experience — React/TypeScript app, docs, onboarding flows, and self-serve surfaces.
- Compliance and observability — data-sourcing proofs, GDPR/PII handling, and scoreboard metrics for each Data Quality dimension.
Core Responsibilities
Product & Application Development
- Build and maintain Forager's customer-facing web app (React, TypeScript, Django/Python).
- Implement and maintain RESTful APIs for integrations, feeds, and platform customer workflows.
- Develop scalable backend services — workers, task queues, data pipelines — that keep refresh cycles predictable and fill rates high.
- Participate actively in product planning; help shape which features have the highest customer impact.
Search, Data Layer & ETL
- Build and operate Elasticsearch indices for people/company search — schema, ingestion, relevance, scaling.
- Design and operate ETL applications moving data into searchable stores, feeds, and warehouses (Snowflake, S3).
- Optimize PostgreSQL — query performance, indexing, cache utilization.
- Drive measurable improvements in latency, uptime, error rate, and scalability.
DevOps & Infrastructure
- Own day-to-day AWS infrastructure (ECS, S3, etc.) alongside DevOps.
- Operate CI/CD, observability (Grafana, CloudWatch, Sentry), and on-call response for the surfaces you build.
- Share crawler infrastructure maintenance with the team.
Collaboration & Quality
- Code review with high standards for readability, security, and performance.
- Write unit, integration, and E2E tests — test reliability is a quality contributor, not overhead.
- Document features, architecture, and API contracts; great developer docs are how our customers succeed.
Requirements
Required Experience
- 5+ years building and operating production web applications and APIs.
- Strong proficiency in Python/Django and React/TypeScript.
- Hands-on experience operating Elasticsearch at scale — schema design, query tuning, cluster management.
- Production experience with PostgreSQL, Redis, and async task systems (Celery/RabbitMQ or equivalent).
- Demonstrated track record building and operating ETL pipelines that move significant data volumes reliably.
- Comfortable with AWS (ECS, S3, CloudWatch) and CI/CD pipelines (GitHub Actions or equivalent).
- Experience operating services in production — observability, on-call, incident response.
- Strong written communication; comfortable owning documentation as a deliverable.
AI & Agentic Workflows (Required)
This is non-negotiable. You must demonstrate strong, hands-on fluency with:
- AI coding tools (Claude Code, Cursor, Copilot, or equivalent) used daily for implementation, refactoring, and code review.
- Agentic workflows — designing, orchestrating, and debugging multi-step agent pipelines (e.g., research → plan → implement → verify loops, MCP server integration, tool-use design).
- Judgment about where AI helps vs. hurts — knowing when to delegate to an agent, when to write the code yourself, and how to keep an agent on rails for production work.
We evaluate this in interviews with live exercises. Candidates without demonstrable agentic workflow experience will not be considered.
Nice to Have
- Experience with Snowflake or other data warehouses.
- Background in B2B data products — enrichment, contact data, company data, search/discovery.
- Experience with web crawling, data sourcing, or large-scale ingestion systems.
- Open-source contributions or public technical writing.
Benefits
- Remote-first culture.
- Unlimited PTO.
- Competitive salary and benefits package.
- Work in a fast-paced, collaborative, and supportive environment.
- Opportunity to grow and advance your career.
- Opportunity to get in on the ground floor of a fast-growing startup.