Roles and Responsibilities
Location - Bangalore, Chennai, Hyderabad, Pune, Kochi, Bhubaneswar, Kolkata
Experience - 5+ years
- Design, develop, test, deploy, and maintain large-scale data pipelines using AWS services such as S3, Lambda, and Step Functions.
- Develop ETL processes using PySpark and Python to extract insights from structured and unstructured data sources.
- Collaborate with cross-functional teams to gather business requirements and design solutions that meet them.
- Ensure high-quality code by writing unit tests and integrating with CI/CD pipelines.
- Troubleshoot issues related to data processing workflows and provide timely resolutions.