DevOps Data Engineer - Spark & Scala || 4 to 15 years || Bengaluru (Bangalore)


Role & responsibilities


  • Design, build, and maintain data pipelines using Scala, Spark, and SQL (a brief sketch follows this list).
  • Develop and optimize data transformations using Spark DataFrames.
  • Manage and maintain our data infrastructure on cloud platforms (preferably AWS or GCP).
  • Implement and manage data streaming solutions using Amazon MSK (Managed Streaming for Apache Kafka) or self-managed Kafka.
  • Design and implement data storage solutions using S3 or similar object storage.
  • Automate infrastructure provisioning, configuration management, and data pipeline deployments using tools such as Terraform, Ansible, or CloudFormation.
  • Build and maintain CI/CD pipelines for automated data pipeline deployments using tools such as Jenkins, GitLab CI, or CircleCI.
  • Monitor data pipeline performance and identify areas for optimization.
  • Troubleshoot and resolve data pipeline and infrastructure issues in a timely manner.
  • Implement data quality checks and monitoring.
  • Implement security best practices across our data infrastructure and pipelines.
  • Collaborate with data scientists and data engineers to understand their data needs and build solutions that meet them.
  • Participate in on-call rotations to ensure 24/7 system availability.
  • Contribute to the development of internal tools and automation scripts.
  • Stay up-to-date with the latest data engineering and DevOps technologies and trends.
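As a flavour of the day-to-day work, here is a minimal sketch in Scala (Spark Structured Streaming) of the kind of pipeline described above: it reads a Kafka/MSK topic, applies a DataFrame transformation, and writes Parquet to S3. The broker addresses, topic name, and S3 paths are illustrative placeholders only.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.streaming.Trigger
import org.apache.spark.sql.types._

object OrdersStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("orders-stream-to-s3").getOrCreate()
    import spark.implicits._

    // Schema of the JSON payload carried in the Kafka message value (hypothetical example).
    val orderSchema = new StructType()
      .add("order_id", StringType)
      .add("amount", DoubleType)

    // Read a Kafka/MSK topic as a streaming DataFrame (brokers and topic are placeholders).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
      .option("subscribe", "orders")
      .load()

    // Parse the message value and apply a simple DataFrame transformation.
    val orders = raw
      .selectExpr("CAST(value AS STRING) AS json", "timestamp")
      .select(from_json($"json", orderSchema).as("o"), $"timestamp")
      .select($"o.order_id", $"o.amount", $"timestamp")
      .filter($"amount" > 0)

    // Write curated records to S3 as Parquet, with checkpointing for fault tolerance.
    orders.writeStream
      .format("parquet")
      .option("path", "s3a://example-bucket/curated/orders/")
      .option("checkpointLocation", "s3a://example-bucket/checkpoints/orders/")
      .trigger(Trigger.ProcessingTime("1 minute"))
      .start()
      .awaitTermination()
  }
}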

Preferred candidate profile


  • Bachelor's degree in Computer Science, Data Science, or a related field (or equivalent experience).
  • 5+ years of experience in a data engineering or DevOps role.
  • Strong proficiency in Scala, Spark, and SQL.
  • Experience with Spark DataFrames.
  • Experience with cloud platforms (preferably AWS or GCP).
  • Experience with Amazon MSK (Managed Streaming for Apache Kafka) or self-managed Kafka.
  • Experience with S3 or similar object storage.
  • Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
  • Experience with configuration management tools such as Ansible or Chef.
  • Experience with CI/CD tools such as Jenkins, GitLab CI, or CircleCI.
  • Proficiency in scripting languages such as Python or Bash.
  • Strong understanding of data warehousing concepts.
  • Strong troubleshooting and problem-solving skills.
  • Excellent communication and collaboration skills.

Other Details
Industry Type: IT Services & Consulting
Employment Type: Full Time, Permanent
Role Category: Software Development
 
TCS is hiring a DevOps Data Engineer - Spark & Scala (4 to 15 years) in Bengaluru. Apply via TCS Careers.