Data Engineer
Location: Dallas, TX (Hybrid – 3 days onsite)
Duration: Long-term Project | Interview: In-person
Join our growing data team working on cloud migration and real-time data solutions. We're looking for a hands-on Data Engineer who can build robust pipelines, work with cloud tooling, and help us manage and process data at scale.
Roles and Responsibilities:
- Migrate data from on-prem Teradata to AWS Redshift with zero data loss.
- Build batch and real-time data pipelines using Python.
- Set up and manage Kafka on AWS for streaming data.
- Automate ETL workflows using AWS Glue and store data in S3.
- Design Redshift data models optimized for fast querying.
- Create reusable frameworks to reduce development time and improve code quality.
- Build dashboards and reports for client data insights.
- Use GitLab CI/CD for testing and deployments.
- Work with AWS tools like Lambda, SQS, SNS, Glue, DynamoDB, OpenSearch, and more.
- Develop tools to generate mock data (JSON, CSV, XML).
- Deploy containerized apps using Kubernetes.
- Follow best practices to ensure data security and compliance.
Qualifications:
- Strong experience with Python and AWS cloud technologies.
- Good understanding of Kafka, ETL, and data warehouses.
- Familiarity with CI/CD, boto3, and Kubernetes is a big plus.