
Sr. Data Engineer

Company

Federal Reserve Bank of Dallas

We are dedicated to serving the public by promoting a strong financial system and a healthy economy for all.

These efforts take a team of dedicated individuals doing many different jobs.

Together we’re creating a workplace where talented people can thrive, and we welcome your unique background and perspective to help present the best possible solutions for our partners.

The Data Engineer is responsible for designing, developing, and maintaining data pipelines in support of data engineering and data management activities.

You must be passionate about data engineering and data quality with a solid background in AWS, Databricks, Python, Trino/Starburst and SQL.

Location: #LI-Hybrid

About the Role:

The Sr. Data Engineer develops data solutions with moderate to high complexity that also have moderate to high business criticality.

The Engineer also develops data set processes; works to improve data efficiency and quality; uses large datasets to address business issues; provides full software development lifecycle support; manages design, security, standards, data quality, and compliance processes; and works independently, with guidance only in the most complex situations.

The engineer may lead functional teams or projects and has in-depth domain knowledge of the relevant industry.

Other responsibilities include analyzing requirements and designing, coding, testing, debugging, documenting, and maintaining data and analytics solutions.

You will report to the Technology Solutions Manager in Dallas, who manages an Engineering team of 10 members with assignments in Dallas and New York City. We are a collaborative, passionate team delivering sustainable software and data-driven solutions that meet the needs of our customers across the Federal Reserve System.

You Will:


* Design, develop, monitor, and maintain data pipelines in an AWS GovCloud ecosystem with Databricks, Delta Lake, and Starburst as the underlying platforms (see the illustrative sketch after this list).


* Collaborate with cross-functional teams to understand data needs and translate them into effective data pipeline solutions.


* Develop, optimize, and maintain ETL processes to facilitate the smooth and accurate movement of data across systems.


* Establish data quality checks and ensure data integrity and accuracy throughout the data lifecycle.


* Implement and enforce data governance policies and procedures.


* Optimize data processing and query performance for large-scale datasets within AWS and Databricks environments.


* Document data engineering processes, architecture, and configurations.


* Troubleshoot and debug data-related issues on the AWS Databricks platform.


* Optimize AWS Databricks streaming jobs.


* Integrate Databricks with other data storage and processing systems.
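
To give a concrete sense of the pipeline work described in the list above, here is a minimal, illustrative PySpark sketch (not taken from the posting): it reads raw data, applies a basic data quality gate, and writes a curated Delta table that downstream Trino/Starburst users could query. It assumes a Databricks/Delta-enabled Spark session, and the S3 path, column names, and table name are hypothetical placeholders.

```python
# Illustrative sketch only: a small batch pipeline of the kind described above.
# Assumes a Databricks/Delta-enabled Spark session; the S3 path, columns, and
# table name are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Ingest raw records (hypothetical landing location).
raw = spark.read.json("s3://example-bucket/raw/transactions/")

# Basic data quality gate: keep only records with a key and an amount.
clean = raw.filter(
    F.col("transaction_id").isNotNull() & F.col("amount").isNotNull()
)

# Surface how many rows were rejected so data quality can be monitored.
rejected = raw.count() - clean.count()
print(f"rejected rows: {rejected}")

# Persist the curated output as a Delta table for downstream Trino/Starburst access.
clean.write.format("delta").mode("overwrite").saveAsTable("curated.transactions")
```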

You Have:


* Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.


* Minimum ...



