Software Engineer [Multiple Positions Available]

DESCRIPTION:

Duties: Review, understand, code, optimize, and automate existing one-off data transformation pipelines into discrete, scalable tasks.

Plan, design, and implement data transformation pipelines and monitor operations of the data platform in a production environment.

Collaborate with internal clients and service delivery engineers to identify data needs and intended workflows, and troubleshoot to find workable solutions.

Gather, analyze, and document detailed technical requirements to design and implement solutions, and disseminate information to guide other engineers.

Contribute code to the underlying infrastructure, software development kits, and platforms being built to support bespoke data transformation pipelines and enable predictive models to be produced and run at scale.

Identify engineering opportunities to optimize operational effort and running costs of the data platform.

Mentor junior engineering staff and provide guidance on day-to-day code development work.

QUALIFICATIONS:

Minimum education and experience required: Bachelor's degree in Computer Science, Information Technology, Software Engineering, Mathematics, or related field of study plus 5 years of experience in the job offered or as Software Engineer, Data Engineer/Developer, or related occupation.

Skills Required: This position requires 5 years of experience with the following: Designing and implementing scalable ETL pipelines to process structured and semi-structured data.

This position requires 3 years of experience with the following: Processing data across distributed environments using Apache Spark on Big Data ecosystems such as Cloudera or Hortonworks; Building distributed data processing workflows using Scala, Python, and Java on Spark; Supporting real-time and batch data ingestion, data cleansing and transformation, and feature extraction on Spark; Managing large-scale data lake tables in Parquet and Avro formats; Implementing low-latency, scalable data operations and supporting real-time lookups, updates, and analytics using Apache HBase and Apache Cassandra.

This position requires 2 years of experience with the following: Implementing ACID-compliant data operations and enabling schema evolution using Delta table structures; Implementing partitioning within Hadoop-based architectures; Configuring and maintaining Grafana dashboards integrated with Prometheus, Elasticsearch, or CloudWatch to monitor pipeline performance, API services, and system health in real time; Documenting data workflows, Spring Boot API specifications, CI/CD processes, Grafana configurations, and cloud architecture using Confluence.

This position requires 1 year of experience with the following: Creating and deploying RESTful APIs using Spring Boot in Docker containers to deliver processed data access and operational insights; Managing source code to maintain structured development workflows, version control, and team collaboration using Git with GitHub and Bitbucket; Building,...
