Data Engineer | Building AI-Ready Data Platforms for Global Enterprise and Leading Consumer Brands | Contract Role | Remote
Role Overview
This contract Data Engineer role focuses on building production-grade, AI-ready data platforms used by global enterprise and leading consumer brands. The work involves complex data systems where reliability, scale, and thoughtful architecture are essential. It is a hands-on role suited to engineers who have shipped real-world systems into production.
About the Organization
This role sits within a technology-first digital agency network built on a rare balance of deep engineering and strong creative thinking at scale. The organization partners with large, complex businesses to solve challenges driven by evolving consumer behavior, emerging technologies, and AI-led transformation.
With a global footprint and the ability to work across the entire consumer journey, the teams here are known for taking on high-impact, hard-to-solve problems rather than incremental work. The environment blends engineering rigor with strategic and creative depth, making it well suited for data professionals who want their work to influence real products, platforms, and customer experiences used by millions.
You’ll collaborate with multidisciplinary teams across engineering, data, and strategy, contributing to systems that support enterprise-grade platforms and high-visibility digital ecosystems for some of the world’s most established organizations.
Key Responsibilities
Core Engineering
Write clean, modular, production-grade Python for data platforms and pipelines.
Containerize workloads with Docker; deploy and manage services on Kubernetes.
Define, manage, and evolve infrastructure using Terraform, following infrastructure-as-code best practices.
Data Stack & Pipelines
Design, build, and maintain complex data pipelines and workflows.
Orchestrate pipelines using Apache Airflow and Dagster.
Build and manage transformation layers using dbt.
Work across cloud platforms such as AWS, GCP, or Azure.
Integrate and manage streaming and SaaS tools including Kafka and Fivetran.
Own data pipelines end to end; candidates should have shipped at least two complex systems to production.
AI-Ready Data Architecture
Process and prepare unstructured data such as text, documents, and logs for LLM consumption.
Build and maintain chunking and embedding pipelines.
Work with modern databases, including vector databases such as Pinecone, Milvus, or pgvector.
Apply graph data modeling concepts using tools such as Neo4j to support advanced retrieval use cases.
Make informed architectural decisions across data lakes, data warehouses, and relational databases, including indexing and search optimization.
Quality, Security & Governance
Build automated test pipelines to ensure data quality and integrity.
Apply best practices for PII protection, access control, and data governance.
Ensure data systems meet reliability and security expectations in production environments.
What This Role Requires
Strong production experience building, deploying, and owning data pipelines.
Deep understanding of modern data stacks and cloud-native systems.
An engineering-first mindset focused on reliability, clarity, and scalable design.