Distributed Training & Performance Engineer - Executive Director
Are you looking for an exciting opportunity to join a dynamic and growing team in a fast-paced and challenging area? This is a unique opportunity to work with the Global Technology Applied Research (GTAR) center at JPMorganChase.
The goal of GTAR is to design and conduct research across multiple frontier technologies to enable novel discoveries and inventions, and to inform and develop next-generation solutions for the firm's clients and businesses.
As a senior-level engineer in the GTAR center, you will design, optimize, and scale large-model pretraining workloads across hyperscale accelerator clusters.
This role sits at the intersection of distributed systems, kernel-level performance engineering, and large-scale model training.
The ideal candidate can take a fixed hardware budget (accelerator type, node topology, interconnect, and cluster size) and design an efficient, stable, and scalable training strategy spanning parallelism layout, memory strategy, kernel optimization, and end-to-end system performance.
This is a hands-on role with direct impact on training throughput, efficiency, and cost at scale.
Job responsibilities
* Design and optimize distributed training strategies for large-scale models, including data, tensor, pipeline, and context parallelism (see the parallelism sketch after this list).
* Manage end-to-end training performance: from data input pipelines through model execution, communication, and checkpointing.
* Identify and eliminate performance bottlenecks using systematic profiling and performance modeling.
* Develop or optimize high-performance kernels using CUDA, Triton, or equivalent frameworks (a Triton sketch follows this list).
* Design and optimize distributed communication strategies to maximize overlap between computation and inter-node data movement (an overlap sketch follows this list).
* Design memory-efficient training configurations (caching, optimizer sharding, checkpoint strategies).
* Evaluate and optimize training on multiple accelerator platforms, including GPUs and non-GPU accelerators.
* Contribute performance improvements back into internal training pipelines.
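For illustration, a minimal sketch of the parallelism-layout work the first bullet describes, assuming a PyTorch 2.2+ environment launched with torchrun; the 4x4 mesh shape, the "dp"/"tp" dimension names, and the Linear stand-in model are placeholders for this example, not a prescribed design:

    import os
    import torch
    import torch.distributed as dist
    from torch.distributed.device_mesh import init_device_mesh
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    dist.init_process_group("nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))  # set by torchrun

    # 16 ranks arranged as 4 data-parallel replicas x 4 tensor-parallel shards;
    # the real layout depends on the fixed hardware budget mentioned above.
    mesh = init_device_mesh("cuda", (4, 4), mesh_dim_names=("dp", "tp"))
    dp_group = mesh.get_group("dp")  # parameter/gradient sharding collectives
    tp_group = mesh.get_group("tp")  # intra-layer (tensor-parallel) collectives

    # Shard parameters, gradients, and optimizer state over the "dp" dimension;
    # a single Linear stands in for a real transformer block here, and the
    # tensor-parallel wiring over tp_group is omitted for brevity.
    model = FSDP(torch.nn.Linear(4096, 4096).cuda(), process_group=dp_group)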
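Likewise, a minimal Triton sketch for the kernel-development bullet; a fused elementwise add is illustrative only (real targets would be attention, normalization, or optimizer-step fusions), and the names fused_add_kernel, fused_add, and BLOCK are invented for this example:

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def fused_add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
        pid = tl.program_id(axis=0)
        offs = pid * BLOCK + tl.arange(0, BLOCK)
        mask = offs < n_elements          # guard the ragged final block
        x = tl.load(x_ptr + offs, mask=mask)
        y = tl.load(y_ptr + offs, mask=mask)
        tl.store(out_ptr + offs, x + y, mask=mask)  # one global-memory round trip

    def fused_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        out = torch.empty_like(x)
        n = out.numel()
        grid = (triton.cdiv(n, 1024),)    # one program per 1024-element block
        fused_add_kernel[grid](x, y, out, n, BLOCK=1024)
        return out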
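And a minimal sketch of compute/communication overlap via asynchronous collectives, for the overlap bullet above; overlapped_step and its arguments are hypothetical, and production code would also manage gradient buckets and CUDA streams explicitly:

    import torch.distributed as dist

    def overlapped_step(grad_bucket, next_input, next_layer):
        # Launch the all-reduce asynchronously so NCCL moves bytes on its own
        # stream while independent compute proceeds on the default stream.
        handle = dist.all_reduce(grad_bucket, op=dist.ReduceOp.SUM, async_op=True)
        out = next_layer(next_input)   # compute overlapped with communication
        handle.wait()                  # synchronize only when the grads are needed
        grad_bucket /= dist.get_world_size()  # SUM -> mean across ranks
        return out, grad_bucket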
Required qualifications, capabilities, and skills
* Master's degree with 5+ years of industry experience, or Ph.D. with 3+ years of industry experience, in computer science, physics, math, engineering, or a related field.
* Engineering experience at top AI labs, HPC centers, chip vendors, or hyperscale ML infra teams.
* Strong experience designing and operating large-scale distributed training jobs across multinode accelerator clusters.
* Deep understanding of distributed parallelism strategies: data parallelism, tensor/model parallelism, pipeline parallelism, and memory/optimizer sharding.
* Proven ability to profile and optimize training performance using industry-standard tools such as Nsight, the PyTorch profiler, or equivalent (a minimal profiling sketch follows this list).
* Hands-on experience with GPU programming and kernel optimization.
* Strong understanding of acc...
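By way of example, a minimal profiling loop of the kind the Nsight/PyTorch profiler bullet refers to, using torch.profiler; the schedule numbers, the "./traces" output path, and the loader/train_step names are placeholders for this sketch:

    from torch.profiler import (ProfilerActivity, profile, schedule,
                                tensorboard_trace_handler)

    with profile(
        activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
        schedule=schedule(wait=1, warmup=1, active=3),  # skip 1, warm 1, record 3 steps
        on_trace_ready=tensorboard_trace_handler("./traces"),  # placeholder path
        record_shapes=True,
    ) as prof:
        for step, batch in enumerate(loader):   # `loader`/`train_step` assumed to exist
            train_step(batch)
            prof.step()   # advance the profiler schedule once per training step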
- Rate: Not Specified
- Location: New York, US-NY
- Type: Permanent
- Industry: Finance
- Recruiter: JPMorgan Chase Bank, N.A.
- Contact: Not Specified
- Reference: 210709165
- Posted: 2026-02-07 07:54:47