About Octopus Energy
Since 2015, Octopus has been on a mission to bring affordable, green energy to the world. With the help of our in-house developed technology platform, Kraken, we've become the 2nd largest energy supplier in the UK and licensed our software to retail giants including E.ON in the UK, Origin Energy in Australia and Tokyo Gas in Japan. We've reinvented energy products with smart, data-driven tariffs to balance customer demand with renewable generation - and we're the biggest investor in renewable generation in the UK.
We've since expanded our tentacles to Australia, and we are now looking for a Data Engineer based in Melbourne. Working with Octopus Energy is a chance to join an exciting scale-up business within the energy supply sector, one that's at the forefront of changing the landscape of the energy industry across the world. A role with us offers you the chance to contribute to building world-class operations that will catapult you into a fantastic career.
About the team
At Octopus we have developed a platform to provide data services to our retail businesses and clients across 14 deployments globally. The data platform engineering team develops and manages the back-end systems and processes supporting this platform, predominantly working with time-series data fed into the system by our customers' smart meters. We have developed a suite of applications to process, transform, and make this data available to downstream services. To handle the volume of data we typically see, our applications run PySpark or dbt jobs using Databricks as our Spark engine, and we lean heavily on Airflow and Kubernetes. We also employ software engineering best practices to design, test, and deploy our data platform and services.
More about the role
This is a fantastic opportunity to work on data problems that genuinely move us closer to Net Zero with a company that is passionate about building great technology to change the way customers use energy. As a data engineer, you will be responsible for supporting the constant improvement of our data systems. As well as working on our core datalake transformations you may be developing custom applications to process third-party data, managing infrastructure in our AWS deployment, or working to expand our presence across our APAC (Australia, New Zealand, Japan) deployments.
Key responsibilities:
- Supporting the development of our core PySpark applications
- Supporting the development of our dbt models
- Building and maintaining data pipelines in Airflow
- Responding to internal requests for tooling and data pipelines
More about you, our ideal candidate
We are looking for someone who will self-manage, adapt to our changing business requirements, and proactively work to scope problems and deliver pragmatic solutions.
Ideally, you have experience:
- Building applications in Python
- Working in a Spark or Databricks environment
- Working in a software or data engineering environment
We also want our data engineers to be great software engineers with a passion for writing quality code, so it would be helpful to have experience in at least some of the following:
- SQL
- dbt
- Cloud data platforms (ideally AWS)
- Kubernetes
- Version control and DevOps best practices
We would prefer someone who can work hybrid from our Melbourne office (1-2 days a week onsite), but it's not a deal breaker. You do need to have the right to work in Australia.
We're very excited to be growing our team. We're looking for skills and experience to help shape and define the future of not only our team, but the wider business at a global scale. If you're reading this and grinning, please apply! There are huge challenges to tackle, and we need amazing people who are keen to get stuck in.
$5,000 - $9,166.67 / month