Job Description
About the Team and Role: Data Engineering
MFine's technologies are having a positive, real-life impact on millions of patients across the country. At MFine, we have the tremendous and rewarding challenge of putting access to quality healthcare into the hands of all Indians and forever changing how healthcare is experienced.
Our products use the power of mobile technology and artificial intelligence to evolve clinical protocols, aid in the diagnosis and treatment of over 2,000 common diseases, and power a pan-India network of diagnostics and pharmacies, allowing quality care to reach everyone.
Hundreds of distributed systems, services, and AI models, millions of events, and terabytes of data power our products. From a virtual AI doctor (called Autodoc, which uses a multi-modal inference engine to assist doctors) to knowledge graphs, e-commerce engines, and logistics systems, our product and engineering teams are building the future of healthcare systems.
Join us to power and drive the future of digital healthcare in India.
Data Engineering (Experience 2-4 Years)
In this role, you will be responsible for a range of duties, from data analytics to implementing and delivering advanced data engineering. You will get the chance to leverage your business acumen, programming skills, and technical knowledge of big data and data systems at scale.
In this role, you will:
– Work with business teams to understand overall client requirements and the proposed big data solution
– Build data pipelines and work with ETL tools and data platforms on AWS
– Be hands-on in leading and/or performing development work during the production cycle
For this role, we value people who have:
– Experience in ETL, Python, PySpark, and big data file formats like Apache Parquet, Avro, and ORC
– Experience in database design and implementation
– Experience in writing complex SQL queries
– Experience with databases like MySQL, PostgreSQL, and MS SQL Server
– Familiarity with commonly used big data paradigms like MapReduce, abstractions over it such as Hive (warehousing), and job orchestration tools like Airflow
– Experience working on Python, Node.js, and Java projects
– Experience with the AWS cloud
– The ability to gather and process raw data at scale (including writing scripts, calling APIs, and writing SQL queries)
– The ability to design and develop data structures that support high-performing and scalable analytic applications