Data Engineer III - ETL, AWS

Be part of a dynamic team where your distinctive skills will contribute to a winning team and culture.

As a Data Engineer III at JPMorgan Chase within Consumer and Community Banking - Wealth Management Tech, you serve as a seasoned member of an agile team to design and deliver trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way.

You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities


* Supports review of controls to ensure sufficient protection of enterprise data


* Advises on and makes custom configuration changes in one to two tools to generate a product at the business's or customer's request, and updates logical or physical data models based on new use cases.


* Frequently uses SQL and understands NoSQL databases and their niche in the marketplace


* Adds to a team culture of diversity, equity, inclusion, and respect, and creates secure and high-quality production code.


* Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development.


* Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems.


* Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture.

Required qualifications, capabilities, and skills


* Formal training or certification on software engineering concepts and 3+ years applied experience


* Experience across the data lifecycle with Spark-based frameworks for end-to-end ETL, ELT, and reporting solutions, using key components like Spark SQL and Spark Streaming.


* Strong hands-on working experience with the Big Data stack, including Spark and Python (Pandas, Spark SQL).


* Good understanding of relational (RDBMS) and NoSQL databases and Linux/UNIX.


* Strong knowledge of multithreading and high-volume batch processing.


* Proficient in performance tuning for Python and Spark, along with the AutoSys or Control-M scheduler.


* Cloud implementation experience with AWS, including AWS data services.


* Proficiency in Lake Formation, Glue ETL or EMR, S3, Glue Data Catalog, Athena, Kinesis or MSK, Airflow or Lambda + Step Functions + EventBridge, and data serialization/deserialization.


* Expertise in at least two of the following formats: Parquet, Avro, and fixed width.


* AWS data security: good understanding of security concepts such as Lake Formation, IAM, service roles, encryption, KMS, and Secrets Manager.

Preferred qualifications, capabilities, and skills


* Proficiency in automation and continuous delivery methods.


* Proficient in all aspects of the Software Development Life Cycle.



