Databricks Data Engineer (PySpark, Financial Datasets) – Sandton – R400 to R500 per hour
Responsibilities:
• ETL/ELT Development: Develop, test, and deploy robust and efficient data pipelines using PySpark/Scala and the Databricks platform (including Delta Lake and Databricks Workflows) – a minimal pipeline sketch follows this list.
• Data Transformation: Implement complex data transformation logic to clean, enrich, and aggregate financial data from various source systems (e.g., core banking, trading platforms).
• Cloud Integration: Integrate Databricks with native cloud services (AWS, Azure, or GCP) for data ingestion (e.g., S3, ADLS) and workflow orchestration (e.g., Azure Data Factory, AWS Glue).
• Quality and Testing: Write unit and integration tests for data pipelines and apply data quality checks to ensure accuracy in financial reporting and analysis.
• Compliance Support: Apply basic security and access control policies, such as those governed by Unity Catalog, to adhere to the firm's compliance requirements (see the access-control sketch below).
• Performance: Assist in monitoring and tuning Databricks cluster configurations and Spark job parameters to improve efficiency and reduce cost.
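For a sense of the day-to-day work, here is a minimal PySpark/Delta Lake sketch of the kind of pipeline described above. It is illustrative only: the storage path, table name (finance.daily_txn_summary), and column names are hypothetical placeholders, not details from this role.

```python
# Illustrative PySpark/Delta Lake pipeline sketch. The storage path,
# table name, and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fin-etl-sketch").getOrCreate()

# Ingest: read raw transaction records from cloud object storage.
raw = spark.read.format("json").load("s3://example-bucket/raw/transactions/")

# Transform: parse timestamps, standardise currency codes, and drop
# records that fail a basic validity check.
clean = (
    raw
    .withColumn("booked_at", F.to_timestamp("booked_at"))
    .withColumn("currency", F.upper(F.col("currency")))
    .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
)

# Aggregate: one row per account per booking date for reporting.
daily = (
    clean
    .groupBy("account_id", F.to_date("booked_at").alias("booking_date"))
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("txn_count"),
    )
)

# Quality gate: fail fast if any daily total is negative.
assert daily.filter(F.col("total_amount") < 0).count() == 0, "negative daily totals"

# Load: append to a Delta table; Delta enforces the schema on write.
daily.write.format("delta").mode("append").saveAsTable("finance.daily_txn_summary")
```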
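Similarly, the Unity Catalog compliance duty typically comes down to SQL GRANT statements. A hedged sketch, assuming a finance.reporting schema and a data_analysts group (both placeholders):

```python
# Unity Catalog access-control sketch. GRANT is standard Databricks SQL;
# the schema, table, and group names are hypothetical placeholders.
spark.sql("GRANT USE SCHEMA ON SCHEMA finance.reporting TO `data_analysts`")
spark.sql("GRANT SELECT ON TABLE finance.reporting.daily_txn_summary TO `data_analysts`")
```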
Qualifications and Experience:

The reference number for this position is NG60854. This is a contract role in Sandton offering a salary of R400 to R500 per hour, negotiable based on experience. E-mail Nokuthula on nokuthulag@e-Merge.co.za or call her for a chat on 011 463 3633 to discuss this and other opportunities. Are you ready for a change of scenery? e-Merge IT recruitment is a niche recruitment agency. We offer our candidates options so that we can successfully place the right people with the right companies, in the right roles. Check out the e-Merge IT website www.e-merge.co.za for more great positions. Posted on 31 Oct 15:32; closing date 30 Dec.