BIG DATA WITH SCALA - Hyderabad

Employment Information

Desired Competencies (Technical/Behavioral)

Must-Have



  • Strong experience in Hadoop ecosystem (HDFS, Hive, Spark, Oozie).
  • Hands-on expertise in AWS and GCP cloud services for data engineering.
  • Proficiency in Scala and Python programming.
  • Experience with Terraform, DevOps practices, and Git-based workflows.
  • Solid understanding of ETL design principles and data pipeline orchestration.
  • Ability to perform unit testing, integration testing, and end-to-end quality assurance.

Good-to-Have


Responsibilities / Expectations from the Role

  • Design, develop, and maintain scalable ETL pipelines for large datasets.
  • Implement data processing workflows using Hadoop, Hive, Spark, Oozie, and Shell scripting.
  • Work with AWS services (IAM, Kinesis, S3, Glue, Athena, Lambda, Step Functions, CloudWatch, AWS IoT) and GCP services (BigQuery, Dataflow, Cloud Functions, Dataproc, Cloud Composer).
  • Develop solutions using Scala and Python for data transformation and analytics.
  • Manage infrastructure as code using Terraform, and maintain version control with Git.
  • Ensure end-to-end design, testing, and deployment of data solutions.
  • Optimize performance and reliability of data pipelines across cloud platforms.



Other Details
Industry Type: IT Services & Consulting
Employment Type: Full Time, Permanent
Role Category: IT & Information Security - Other
 
TCS is hiring for Big Data with Scala in Hyderabad. Apply via TCS Careers.