Your role
- engineer reliable data pipelines for sourcing, processing, distributing, and storing data in different ways, using cloud data platform infrastructure effectively
- transform data into valuable insights that inform business decisions, making use of our internal data platforms and applying appropriate analytical techniques
- develop, train, and apply Data Engineering techniques to automate manual processes and solve challenging business problems
- ensure the quality, security, reliability, and compliance of our solutions by applying our digital principles and implementing both functional and non-functional requirements.
- build observability into our solutions, monitor production health, help to resolve incidents, and remediate the root cause of risks and issues
- understand, represent, and advocate for client needs
- codify best practices and methodology, and share knowledge with other engineers across UBS
- shape the Reference Data Mastering and Distribution architecture and technology stack within our new cloud-based data lakehouse
Your Career Comeback
Your team
You'll join the team responsible for the mastering and reengineering of reference data in the bank.
You'll play an important role in driving our transition to cloud technologies, simplifying our IT landscape, and improving our business operating model.
Our culture centers on partnership with our businesses, transparency, accountability and empowerment, and passion for the future.
Your expertise
- 8+ years of professional experience
- experience in Distributed Processing using Databricks (preferred) or Apache Spark
- meaningful experience with Scala
- ability to debug Spark applications using tools such as the Ganglia UI, and expertise in optimizing Spark jobs
- experience and interest in Cloud platforms such as Azure (preferred) or AWS
- the ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
- expertise in creating data structures optimized for storage and varied query patterns, e.g. using Parquet and Delta Lake
- meaningful experience in at least one database technology such as:
- traditional RDBMS (MS SQL Server, Oracle, PostgreSQL)
- NoSQL (MongoDB, Cassandra, Neo4j, CosmosDB, Gremlin)
- understanding of Information Security principles to ensure compliant handling and management of data
- experience in traditional data warehousing / ETL tools (Azure Data Factory, Informatica)
- strong problem solving and analytical skills
- proficiency in working with large and complex code bases (GitHub, Gitflow, fork/pull model) and development tools like IntelliJ
- working experience with Agile methodologies (Scrum, XP, Kanban)
How we hire
Contact Details
UBS Recruiting