SCOPE OF JOB
At Habitat Energy we bring together exceptionally talented and passionate people in the domains of energy trading, data science, software engineering and an in-depth understanding of flexible energy assets. Our aim is to maximise the value of large-scale flexible energy assets (e.g., battery storage) so they are attractive investments, are deployed at scale and enable the energy transition. We are looking for smart, motivated people to join our team who share our belief that we can outperform the energy sector dinosaurs, have a positive impact on the planet and have fun doing it together.
Our models take thousands of decisions in the market each day, algorithmically dispatching one of the largest portfolios of merchant batteries in the UK – so we’re looking for an outstanding engineer to bring their data expertise and allow us to continue to scale in the UK and internationally. While we really value curious generalists, specific responsibilities include:
- Build and maintain real-time data streaming from numerous sources in the UK, Australia and USA, including:
  - Current and historical metrics describing the power system (e.g. demand, supply, the actions of specific generators/loads on the system)
  - Live asset data (e.g. <1 second granularity battery data) for optimisation and long-term modelling
- Support design, development and maintenance of our data architecture
- Work as part of multidisciplinary teams to scope, develop and deliver new products and features
- Improve internal tools for democratisation of our data
- Communicate and document solutions and design decisions
We’re looking for someone who is a great fit for our company. We want people who take accountability, build trust and are innovative. We encourage you to apply even if you do not meet every requirement in this posting. We value diversity, and our environment is supportive, challenging and focused on the consistent delivery of high-quality, meaningful work.
- Python – 3+ years of experience, with (at least) exposure to the following technologies: ORMs (e.g. SQLAlchemy) and async programming
- Strong AWS skills in a 24/7 operational environment and knowledge of AWS data-related technologies
- Expert in SQL (PostgreSQL) and experienced in designing efficient data models
- Expertise with at least one distributed data processing framework (e.g. Hadoop, Spark, Flink, Storm)
- Experience working with time series data
- Experience with development, test and production environments, and with CI/CD practices