Software Engineer
DESCRIPTION:
Duties: Develop data pipeline software using Python and PySpark.
Design and implement data warehousing and data lake architectures.
Collaborate closely with business users to understand large data processing requirements and challenges.
Conduct comprehensive technical research to identify the most suitable data solutions that address the dynamically changing business needs.
Develop, optimize, and fine-tune Spark Big Data applications, ensuring seamless performance as data volumes grow.
Work in close coordination with cross-functional teams, including Data Engineers, Data Scientists, and Analysts, to ensure alignment with organizational goals and objectives.
Analyze daily trading data volumes to estimate the required compute power for processing in distributed computing environments such as Apache Spark.
Continuously monitor and evaluate the performance of implemented data solutions, identify areas for improvement, and drive enhancements to ensure optimal results.
Develop Python and PySpark applications to visualize enterprise-level data quality standards and metrics.
Work in various Unix/Linux-based operating systems and perform in-depth Shell Scripting.
Implement CI/CD systems, including Jenkins, and follow automation and DevOps best practices.
Work with workflow orchestration tools, such as Autosys or Control-M.
Manage data in various columnar and serialization formats, such as JSON, XML, Parquet, and Avro.
Perform Unit Testing, User Acceptance Testing, Functional Testing, and bug fixes in dynamic and rapidly changing environments.
Develop supervised Machine Learning, Deep Learning, and AI algorithms for processing financial data and identifying relationships between attributes.
Consistently adjust and optimize Machine Learning/AI models using daily production quality data until optimal performance is achieved.
Conduct research on the latest industry-wide technology trends and best practices.
QUALIFICATIONS:
Minimum education and experience required: Master's degree in Engineering Management, Computer Science, Data Analytics, Data Engineering, or related field of study plus 3 years of experience in the job offered or as a Software Engineer, Software Developer, IT Consultant, or related occupation.
The employer will alternatively accept a Bachelor's degree in Engineering Management, Computer Science, Data Analytics, Data Engineering, or related field of study plus 5 years of experience in the job offered or as a Software Engineer, Software Developer, IT Consultant, or related occupation.
Skills Required: Requires experience in the following: building data warehouse and data pipelines using Python, PySpark, SQL, and R; data warehousing and data lake architectures, specifically with platforms including Hadoop, Spark, SQL, and Hive; workflow orchestration tools, including Autosys, JIL Programming, and Control-M; CI/CD systems including Jenkins; automation and DevOps best practices; various data columnar and serialization formats, in...
- Rate: Not Specified
- Location: Jersey City, US-NJ
- Type: Permanent
- Industry: Finance
- Recruiter: JPMorgan Chase Bank, N.A.
- Contact: Not Specified
- Reference: 210565034
- Posted: 2024-10-22 09:39:44