Data Engineer
As a Data Engineer, you'll play a crucial role in managing the entire lifecycle of data pipelines and integrating data visualization solutions.
Data Pipeline Creation:
• Collaborate with Product Owners and business representatives to define new data pipelines technically.
• Evaluate the complexity of each initiative and the skills it requires.
• Work closely with subject matter experts to determine the necessary scope of data for fueling the pipelines.
• Design and co-build the architecture for ingesting data, considering data cleansing and preparation based on its origin.
• Implement the data transformation layer, leveraging analytics or AI features.
• Establish connections with external services to expose output data.
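The responsibilities above follow a classic extract–cleanse–transform–load shape. As an illustration only (the posting does not specify the actual stack, schemas, or service endpoints, so the record fields and in-memory "sink" below are hypothetical), a minimal sketch in plain Python might look like:

```python
# Minimal ETL sketch: ingest raw records, cleanse and prepare them,
# apply a transformation, and expose the output to a downstream consumer.
# Field names ("device_id", "reading") and the dict-based sink are
# illustrative assumptions, not part of the actual role's stack.

def extract(raw_rows):
    """Ingest raw records from a source (here, an in-memory list)."""
    return list(raw_rows)

def cleanse(rows):
    """Data cleansing/preparation: drop incomplete rows, normalize types."""
    cleaned = []
    for row in rows:
        if row.get("device_id") is None or row.get("reading") is None:
            continue  # skip records missing required fields
        cleaned.append({"device_id": str(row["device_id"]),
                        "reading": float(row["reading"])})
    return cleaned

def transform(rows):
    """Transformation layer: aggregate readings per device."""
    totals = {}
    for row in rows:
        totals[row["device_id"]] = totals.get(row["device_id"], 0.0) + row["reading"]
    return totals

def load(totals, sink):
    """Expose output data to an external service (here, a dict sink)."""
    sink.update(totals)
    return sink

# Usage: two valid readings for device 1 are aggregated; the
# incomplete record is dropped during cleansing.
raw = [{"device_id": 1, "reading": "2.5"},
       {"device_id": 1, "reading": 3.5},
       {"device_id": None, "reading": 9.0}]
sink = {}
load(transform(cleanse(extract(raw))), sink)
```

In a production pipeline each stage would be backed by real services (e.g. an ingestion layer, a transformation engine, and an external API for exposure), but the stage boundaries stay the same.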
Data Visualization:
• Utilize Power BI capabilities to share various datasets with internal teams and external customers.
• Develop dashboards for automating the monitoring of AI features and adoption.
• Create customer reports that showcase data transformation results, following guidance from our business representatives and Connected Services Hubs remote support agents.
• Actively contribute to integrating Power BI into the program to create a new reporting experience.
• Collaborate with UX designers to structure reporting concepts (events, evidence, recommendations).
• Identify all detailed data models.
• Build a semantic database to anticipate future needs, including Generative AI Co-Pilot integration.
• Deploy and maintain report templates.
DevOps / MLOps role:
• Master model deployment and upgrades in the appropriate environment (Databricks, Dataiku, others).
• Be knowledgeable about the correct implementation of those environments in Azure (or similar environments such as AWS or GCP).
• Act as the single point of contact between the analytics team and external technical DevOps organizations (Advisor Engineerings, AI Hub).
Qualifications
• Python Programming (5/5): Strong programming skills are essential for data manipulation, transformation, and pipeline development.
• ETL – Extract, Transform, Load (5/5): Significant hands-on experience is required.
• Data Preparation and Modeling (5/5): Understanding various kinds of data structures will be valuable in the role.
• SQL / NoSQL Database Management (4.5/5): Proficiency in database management systems.
• MLOps (4/5): Understanding ML concepts for integrating ML models into pipelines.
• DevOps (Azure) (4/5): Familiarity with Azure cloud services for scalable data storage and processing.
• Databricks (3/5): Proficiency with the Databricks environment will be valuable.
• Dataiku (1.5/5): Experience with the Dataiku platform is a plus.
• Power BI usage (5/5): Ability to create insightful visualizations in Power BI is mandatory.
Schedule: Full-time
Req: 0090H5
- Location: Bangalore, IN-KA
- Type: Permanent
- Industry: Finance
- Recruiter: Schneider Electric
- Reference: 0090H5-en-us
- Posted: 2024-08-31 08:42:15