Job Description:
Development of data integration and analysis solutions using various big data technologies, with a primary focus on Spark/Scala and Java development.

Job Responsibilities:
- Perform testing and peer code review.
- Accurately estimate project activities and promptly communicate deviations to the lead or project manager for appropriate action by participating in development planning and progress review sessions.
- Attend all scoping and planning workshops for BAU and project initiatives, and provide input into the design process to ensure the solution meets business requirements.
- Attend relevant project progress review sessions and provide feedback when required.

Job Requirements:
- 7+ years of big data development experience using Spark, Scala, and Java
- Some exposure to AWS services, including but not limited to EMR, Glue, Athena, S3, RDS, Step Functions, Lambda, and Pipeline, is beneficial
- Understanding of the end-to-end solution delivery lifecycle
- Good understanding of metadata-driven architecture
- Strong understanding of batch processing and scripting
- Strong Hadoop architecture knowledge, with the ability to troubleshoot and optimize poorly performing jobs
- Good understanding of Java programming