Responsibilities
Global Banking Technology & Operations (GBTO) delivers day-to-day services to Global Banking & Investor Solutions (GBIS) Business Units and their clients. With IT and Operations teams working side by side under the same leadership, GBTO's goal is to meet the evolving needs of its clients and market requirements, and to anticipate technological advances that accelerate and support the transformation of GBIS activities. GBTO is at the very heart of this technological and operational challenge.
The Listed Derivatives Transactions Service Department primarily provides IT solutions to internal and external clients of the Equity Prime Services business line. Working in partnership with the Prime Services Front Office and Operations, we deliver innovative and robust solutions, with the ambition of producing market-leading solutions.
As a Sr. Big Data Java Engineer, you will work on the development of our data lake streaming platform in Azure. As a member of the Feature Team, you will work autonomously on development tasks including the following:
- Architect, design, and develop Kafka Streams-based Java applications in Azure (a minimal sketch follows this list).
- Architect, design, and develop data pipelines for high-volume Big Data workloads using Spark (Java) in Azure (see the Spark sketch after this list).
- Write high-quality code in Java.
- Design, develop and deploy systems with scalability and resiliency in mind.
- Review code, suggest improvements to design and process, and help the team improve.
- Work with distributed systems handling very large volumes of data.
- Troubleshoot and resolve performance issues.
- Work with the Product Owner to break customer requests down into detailed stories.
- Deliver working code that meets the acceptance criteria and the definition of done at each level.
- Write code and deployment scripts, write unit tests, check code into the source code repository, and monitor delivery pipeline activity to ensure product quality and consistency.
- Conduct the team's testing, deployment, and production activities to ensure production stability, applying the guidelines provided by the chapter.
- Engage in pair programming to write high-quality code that's easy to understand and support.
- Write tests, often before the associated code: unit tests with JUnit and Mockito, and BDD-style tests with Cucumber (see the test-first sketch after this list).
- Attend backlog refinement and planning sessions to discuss and estimate (small, medium, large) upcoming stories.
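To make the Kafka Streams work concrete, here is a minimal sketch of the kind of application the role involves. The topic names (`raw-trades`, `enriched-trades`), the application id, and the placeholder enrichment step are hypothetical illustrations, not the team's actual topology; when targeting Azure Event Hubs' Kafka endpoint, the bootstrap server would be the namespace FQDN on port 9093.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class TradeEnrichmentApp {

    public static void main(String[] args) {
        Properties props = new Properties();
        // The application id doubles as the consumer group and state-store prefix.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "trade-enrichment-app");
        // Hypothetical broker address; for Azure Event Hubs this would be
        // the namespace FQDN on port 9093 with SASL/SSL configured.
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Hypothetical topics: raw listed-derivatives trades in, enriched trades out.
        KStream<String, String> trades = builder.stream("raw-trades");
        trades
            .filter((tradeId, payload) -> payload != null && !payload.isBlank())
            .mapValues(payload -> payload.toUpperCase()) // placeholder for real enrichment
            .to("enriched-trades");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        streams.start();
    }
}
```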
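Similarly, a hedged sketch of the batch side: a Spark (Java) job aggregating a high-volume dataset. The paths, column names, and aggregation are hypothetical; on Azure the paths would typically be ADLS Gen2 `abfss://` URIs rather than local ones.

```java
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.sum;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class DailyPositionsJob {

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("daily-positions")
                .getOrCreate();

        // Hypothetical input path; on Azure this would typically be an ADLS Gen2 URI,
        // e.g. abfss://container@account.dfs.core.windows.net/trades/.
        Dataset<Row> trades = spark.read().parquet("/data/trades/date=2024-01-15");

        // Aggregate net quantity per account and instrument from settled trades.
        Dataset<Row> positions = trades
                .filter(col("status").equalTo("SETTLED"))
                .groupBy(col("account_id"), col("instrument_id"))
                .agg(sum(col("quantity")).alias("net_quantity"));

        positions.write().mode("overwrite").parquet("/data/positions/date=2024-01-15");

        spark.stop();
    }
}
```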
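And a small illustration of the test-first style with JUnit 5 and Mockito mentioned above. `TradeRepository` and `PositionService` are hypothetical stand-ins written only to show the shape of a unit test that drives out a class; in a test-first flow, this test would exist before the production code it exercises.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class PositionServiceTest {

    // Hypothetical collaborator: looks up settled trade quantity for an account.
    interface TradeRepository {
        long settledQuantity(String accountId);
    }

    // Hypothetical class under test; in practice it would live in production code.
    static class PositionService {
        private final TradeRepository trades;

        PositionService(TradeRepository trades) {
            this.trades = trades;
        }

        long netPosition(String accountId) {
            return trades.settledQuantity(accountId);
        }
    }

    @Test
    void netPositionReflectsSettledQuantity() {
        // Mock the repository so the test exercises only PositionService.
        TradeRepository trades = mock(TradeRepository.class);
        when(trades.settledQuantity("ACC-1")).thenReturn(250L);

        PositionService service = new PositionService(trades);

        assertEquals(250L, service.netPosition("ACC-1"));
    }
}
```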
Profile Required
Technical Skills:
- Must have experience with Java, Kafka Streams and Spark.
- Good working knowledge of distributed systems.
- Must be comfortable designing Big Data systems for both batch and real-time processing.
- Must have working experience with Spark jobs and troubleshooting performance issues.
- Must have working experience handling high volumes of data, in both batch and real-time processing.
- Cloud experience (AWS or Azure) is a must.
- Sound knowledge of Spring Boot or another Java back-end framework, plus Kafka, Elasticsearch, Kibana, and Kubernetes.
- Strong experience with cloud and Big Data technologies such as Spark and Kafka.
- Experience designing RESTful APIs and integrating third-party RESTful APIs.
- Working familiarity with code versioning and branching, ideally with Git.
Competencies:
- Comfortable working in agile methodologies, ideally Scrum.
- Experience with automated testing approaches - test driven development, unit testing, integration testing, and BDD testing.
- Exposure to continuous integration tools.
- Understanding of service-oriented architectures and message brokers.
- Strong analytical skills and problem-solving ability.
- Results-oriented; able to set goals and priorities that maximize the use of available resources to consistently deliver quality results.
- Team-oriented, client-focused and open to different ideas/viewpoints.
Experience Needed:
- 7+ years of experience as a senior Java programmer.
- 3+ years of experience with Spark, Kafka, and cloud platforms.
Educational Requirements:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a relevant technical field.