Responsibilities:

- Collaborate with the development team to understand data requirements and identify potential scalability issues.
- Design, develop, and implement scalable data pipelines and ETL processes to ingest, process, and analyze large volumes of data from various sources.
- Optimize data models and database schemas to improve query performance and reduce latency.
- Monitor and troubleshoot the performance of our Cassandra database on Azure Cosmos DB, identifying bottlenecks and implementing optimizations as needed.
- Work with cross-functional teams to ensure data quality, integrity, and security.
- Stay up to date with emerging technologies and best practices in data engineering and distributed systems.

Qualifications & Requirements:

- Proven experience as a Data Engineer or in a similar role, with a focus on designing and optimizing large-scale data systems.
- Strong proficiency with NoSQL databases, particularly Cassandra.
- Experience with cloud-based data platforms, preferably Azure Cosmos DB.
- Solid understanding of distributed systems, data modeling, data warehouse design, and ETL processes.
- Detailed understanding of the Software Development Life Cycle (SDLC) is required.
- Knowledge of a visualization tool such as Power BI or Tableau is a plus.
- Familiarity with the SAP landscape (SAP ECC, SLT, BW, HANA, etc.) is a plus.
- Experience on a data migration project is a plus.
- Knowledge of the supply chain domain would be a plus.

(ref:hirist.tech)